I'd also like to offer a good use case for data design, a real-world example that's in the news today ... ObamaCare.
The Affordable Care Act (ACA) was designed around two primary components - the Federally Funded Exchange (and some state exchanges), or FFE, and the Data Services Hub (DSH). I worked as a data architect on the DSH side, though because of the byzantine way the organization was set up, I had comparatively little ability to push through most of what I felt made for good data design.
The biggest problem with the program was that the key decision makers felt that "software was more important than data" in a project whose entire purpose was moving data around. This meant that you had software developers writing Enterprise Service Buses and SOAP service portals (can you say Java shop?) long before anyone had any idea of what would actually be going through the pipes. The DSH, which was responsible for taking in the FFE products and performing verification with Social Security, Medicare/Medicaid, the IRS, Homeland Security, the Veterans Administration and (believe it or not) the Peace Corps, was seen not as a central data repository (despite the fact that it was central to just about every process) but simply as a conduit, while the FFE (which was responsible for the front end) ended up with the databases.
When I came aboard, each developer was creating their own ad hoc XML structures, based primarily on the recommendations of business analysts who were for the most part told by the FFE that they needed this or that element to support its own ad hoc development process. There was no attempt to standardize on data structures, no logical data models, not even ERwin diagrams. Given the distributed nature of the applications, this could have actually made for a good RDF test case - consistent identifiers used across multiple systems, an open-world assumption about data availability, the ability to associate multiple identifiers from different namespaces with explicit bundles, and a heavily graph-oriented resource distribution - but because it didn't fall within the mandates of the "software" that the system architects knew, this didn't even begin to happen. In the end I managed to get NIEM used, and even helped push the idea of canonicalization of the data architecture, but by then the damage had been done.
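To make that concrete, here is a minimal sketch of the kind of thing I mean, using Python and rdflib; the namespaces, URIs and property names are hypothetical placeholders, not the actual DSH or FFE vocabularies:

```python
# Illustrative sketch only: one applicant resource with a stable URI, and
# identifiers minted by other agencies linked to it rather than copied into
# per-service message formats. All names below are made up for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

# Hypothetical namespaces for the hub and two verifying agencies.
HUB = Namespace("http://example.gov/hub/applicant/")
SSA = Namespace("http://example.gov/ssa/id/")
IRS = Namespace("http://example.gov/irs/id/")
EX  = Namespace("http://example.gov/vocab/")

g = Graph()
g.bind("hub", HUB)
g.bind("ex", EX)

applicant = HUB["a-1001"]               # one stable identifier, used everywhere
g.add((applicant, RDF.type, EX.Applicant))
g.add((applicant, EX.legalName, Literal("Jane Q. Applicant")))

# Identifiers from other namespaces are associated, not duplicated:
g.add((applicant, OWL.sameAs, SSA["123-45-6789"]))
g.add((applicant, OWL.sameAs, IRS["987654321"]))

# Open-world assumption: a verification that hasn't arrived yet is simply
# absent, not an error; each agency can attach its own assertions later.
g.add((applicant, EX.citizenshipVerified, Literal(True)))

print(g.serialize(format="turtle"))
```

Nothing exotic here - the point is that the identifiers and the graph are the design, and every system talks about the same resources.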
The FFE began coding literally from the time the contract was inked, because this was seen as a "software" application. Everything was message queues and pipes and services, and the specific structure and consistency of the data was seen as a non-issue. (It also meant that the GCs could start charging for butts in seats from day one.) In the end, what this meant was that web developers were working with a myriad of inconsistent ad hoc data structures, typically working with them in Java rather than via transformations, and keeping as much of the relevant data out of databases as possible. You couldn't optimize your interface design, your data types were limited to the most primitive databases out there (a big argument ensued at one point because some databases had an 18-character limit for names), and internal validation became nearly impossible. This led to patches on top of patches in the code, and each patch reduced the overall integrity of the system.
This shouldn't have been that hard of an application. If the time had been spent up front on good data design and on moving towards a resource-centric rather than message-centric approach, the project would have been functional in six months and fully operational in ten.
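For contrast, a minimal sketch of what "resource-centric" means in practice (the field and agency names here are purely illustrative, not the actual FFE or DSH structures): every service reads and annotates one canonical applicant record keyed by a stable identifier, instead of each service defining its own ad hoc message payload.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical canonical record; field names are illustrative, not NIEM/DSH.
@dataclass
class Applicant:
    applicant_id: str                        # stable ID shared by all services
    legal_name: str
    verifications: Dict[str, bool] = field(default_factory=dict)

def record_verification(applicant: Applicant, agency: str, passed: bool) -> None:
    """Each agency annotates the same resource rather than returning
    its own ad hoc message structure."""
    applicant.verifications[agency] = passed

jane = Applicant("a-1001", "Jane Q. Applicant")
record_verification(jane, "SSA", True)
record_verification(jane, "IRS", True)
print(jane.verifications)    # {'SSA': True, 'IRS': True}
```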
So yes, I think data design is important.
Kurt Cagle
Principal Evangelist
Semantic Technologies
Avalon Consulting, LLC
443-837-8725