Data quality is too important to ignore

I spend a lot of my time working with colleges and small businesses, helping them find more and better ways to leverage their technology and information assets. And, to be sure, many organizations are sitting on goldmines when it comes to the data they have at their disposal.

With good data, really good things can happen:

  • The organization can leverage their data goldmine to improve sales, improve customer satisfaction and reduce the amount of time that employees have to spend on manual processes.
  • Processes can be much more easily automated and placed into workflows based on individual data elements.
  • Complete organizational identity management systems can be implemented that take the pain out of account provisioning and deprovisioning.
  • The organization can make good decisions and trust that those decisions are backed by reliable, verifiable data.
  • Organizational reporting becomes as simple as pushing a button to run even the most complex reports.  Of course, the reports will have to be developed, but the ongoing execution of those reports can be trusted because the underlying data is trusted.

When data goes bad, bad things can happen:

  • Perhaps worst of all, decisions are made with bad data.  This can lead to serious business problems down the line.  Personally, I’d rather make a decision from my gut than risk making a bad decision with bad data.
  • Automated processes come to a halt or can’t be implemented.  Process automation absolutely requires clean, consistent data.  If data becomes inconsistent, it can break processes.
  • Process workflows break down or are directed incorrectly because the underlying data is incorrect.
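To make the automation point concrete, here is a minimal sketch of how a single inconsistent data element can stall an automated workflow. The department codes and routing table are invented for this illustration:

```python
# Hypothetical routing step in an automated workflow. It assumes department
# codes were entered consistently; any inconsistent entry halts the process.
ROUTING = {"HR": "hr-queue", "FIN": "finance-queue", "IT": "it-queue"}

def route_record(record: dict) -> str:
    """Route a record to a work queue based on its department code."""
    dept = record["dept"]
    if dept not in ROUTING:
        # Inconsistent entries ("Human Resources", "hr", "H.R.") land here
        # and sit until someone cleans them up by hand.
        raise ValueError(f"Unrecognized department code: {dept!r}")
    return ROUTING[dept]

route_record({"dept": "HR"})  # routes cleanly to "hr-queue"

try:
    route_record({"dept": "Human Resources"})  # same department, different spelling
except ValueError as err:
    print(err)  # the workflow stops here
```

The code itself is trivial; the point is that every automated step embeds assumptions about how the data was entered, and those assumptions are exactly what data entry standards protect.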

How do you prevent these kinds of things from happening?  Frankly, you’ll never reach 100% data purity, but you should be able to get close, and at least to a point where the data is consistent.  Here are a few tips:

  • For companies using enterprise resource planning (ERP) or other integrated systems, establish clear data entry standards that dictate how data is to be stored in the organization’s systems.
  • Establish strong data governance and policies with teeth.  I’ve done this before: at a college, I created a group with representation from every functional area and charged it with creating and enforcing data entry standards.  The group was also responsible for approving significant changes in any area of the system, including shared code tables.  No longer could a new VP of a single functional area come in and make mass changes to the shared system without outside approval… and that had happened in the past!  Now even new VPs need appropriate business justification to make significant changes, and the changes must be run through this pseudo change management committee for approval.  This step ensures that any potential second-order consequences of a change are discussed and taken into consideration before it is approved; after all, a change in a department that manages data early in the data lifecycle can have a drastic impact on departments further down the line.  Anyone seeking approval from this group could appeal decisions to the Chief Information Officer or the college President.  To codify all of this, organizational policies were created and vetted through senior leadership so that everyone was aware of the process and understood the why behind it.
  • Never let “multiple versions of the truth” take hold.  Nothing is worse than discovering that a department has been building shadow systems to manage its own workload.  These departments need to work with central organizations to ensure there remains one version of the data that can be relied upon across the organization.  If additional fields or functionality are needed, those needs should be raised as part of a project planning effort.
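The first tip above can be made enforceable rather than aspirational: entry standards expressed as machine-checkable rules can run at the point of entry or as a periodic audit. A minimal sketch, assuming hypothetical field names and formats (a real standard would come from the governance group, not from IT alone):

```python
import re

# Hypothetical entry standards: each field maps to the pattern it must match.
STANDARDS = {
    "student_id": re.compile(r"^\d{7}$"),              # seven digits, no letters
    "phone":      re.compile(r"^\d{3}-\d{3}-\d{4}$"),  # e.g. 555-123-4567
    "state":      re.compile(r"^[A-Z]{2}$"),           # "CA", never "Calif."
}

def validate(record: dict) -> list[str]:
    """Return the names of fields that violate the entry standards."""
    return [field for field, rule in STANDARDS.items()
            if field in record and not rule.fullmatch(record[field])]

validate({"student_id": "1234567", "state": "CA"})      # → []
validate({"student_id": "12-3456", "state": "Calif."})  # → ['student_id', 'state']
```

Even a rule set this simple turns "data entry standards" from a document people may or may not read into a check that runs on every record.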

Good data is simply too important to be left to chance.  Organizations must be able to trust the integrity of their data in order to handle basic business processes.  Any governance structures put into place must be transparent, well understood, and flexible enough to adapt to changing needs, yet strict enough to protect the organization’s information assets.


  • mlefcowitz

    It seems to me that this piece mixes up the concepts of clean data and good data, and thus mixes up its presentation of a high-level remediation process towards attempting to achieve a high probability of data quality.
    Clean data is data that is accurate and consistent. Good data is data that is relevant to the business questions being asked. The first is solved by a variety of initial data entry constraints, as well as data cleansing and data scrubbing techniques. The second is solved through context and statistical analysis techniques.
    A high-level mini-article aimed at a general readership population should – at the very least – get the basic concepts right.

  • Data Entry Assistant


    I have worked as a Data Entry Assistant for many companies.  I regularly read several blogs to keep up with advancements in this field, and I really enjoyed this blog post.  From my years of experience, I sincerely appreciate it when you say,

    “For companies using enterprise resource planning (ERP) or other integrated systems, establish clear data entry standards that dictate how data is to be stored in the organization’s systems.”

    I sincerely welcome your blog post. It’s a very informative piece for the readers.

    Happy Blogging.

    — Data entry assistant
