
What’s data redundancy?


Data redundancy in databases is the presence of unnecessary duplicate data, which can degrade system performance and information retrieval. Flat databases and systems that rely on manual data entry are particularly susceptible. Data management involves identifying and removing duplications, which can be done through system controls or software tools. Duplicate data can slow down essential functions and complicate tasks, but it is usually straightforward to fix through monitoring and removal.

Data redundancy is a situation that occurs within database systems and results in the unintentional creation of duplicate data that is not required for the database to function. While redundancy is a desirable feature in some situations, this is not true when it comes to the function of a database. The presence of duplicate data often has a negative effect on system function, causing queries to return information that is of little use. One of the key functions of data management is identifying duplicate data and removing those duplications.

The potential for data redundancy is found in virtually any type of database program. Programs that are considered flat, such as spreadsheets, and that rely on manual data entry are particularly susceptible to duplication of information, which can lead to complications when it comes to retrieving the desired information. Relational-style databases, such as sales contact databases, often include controls that help minimize the chances of unintentional duplication, such as preventing the creation of two different contact files for the same contact at the same company. Even with the use of system controls to reduce the incidence of data redundancy, problems can still occur, making it necessary to periodically engage in data cleansing activity within a database.
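The kind of control described above can be sketched with a `UNIQUE` constraint, which makes the database itself reject a second contact file for the same person at the same company. This is a minimal illustration using SQLite; the table and column names are hypothetical, not taken from any particular product.

```python
import sqlite3

# In-memory database for demonstration; a real contact database would persist.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contacts (
        name    TEXT NOT NULL,
        company TEXT NOT NULL,
        UNIQUE (name, company)  -- at most one contact file per person per company
    )
""")

conn.execute("INSERT INTO contacts VALUES ('Ada Lovelace', 'Acme')")
try:
    # A second file for the same contact at the same company is rejected.
    conn.execute("INSERT INTO contacts VALUES ('Ada Lovelace', 'Acme')")
except sqlite3.IntegrityError:
    print("duplicate contact rejected")

count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)  # 1
```

The constraint stops redundancy at entry time, which is cheaper than cleaning it up later.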

At best, data redundancy means that the database is littered with information that is not essential but poses no real threat to the ability to find data when and as needed. In the worst case, the presence of duplicate data slows down essential database functions and complicates the use of the database for certain tasks. For example, using a customer database cluttered with redundant information to generate mailing labels would produce a number of duplicate labels, requiring the duplicates to be sorted out and eliminated before the labels can be used, or requiring time to clean up the database before attempting to generate the labels at all.
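The mailing-label cleanup described above amounts to keeping one record per mailing address before printing. This is a minimal sketch under that assumption; the field names and matching rule (exact address match, ignoring case) are illustrative only, since real deduplication tools apply fuzzier comparisons.

```python
# Hypothetical customer list with a redundant entry for the same address.
customers = [
    {"name": "J. Smith",   "address": "12 Oak St"},
    {"name": "Jane Smith", "address": "12 Oak St"},  # duplicate address
    {"name": "B. Jones",   "address": "9 Elm Ave"},
]

seen = set()
labels = []
for customer in customers:
    key = customer["address"].lower()
    if key not in seen:  # keep only the first record for each address
        seen.add(key)
        labels.append(f'{customer["name"]}\n{customer["address"]}')

print(len(labels))  # 2 labels instead of 3
```

Deduplicating once up front avoids printing and then discarding redundant labels.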

Fortunately, monitoring and fixing data redundancy is something many data management systems can accomplish with relative ease. Some systems will flag duplicate data at entry, making it easy to review the perceived duplication and decide whether to delete it or allow it to be retained. There are also software tools that can scan an existing database for duplications and automatically remove the redundant entries.
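The flag-and-review approach described above can be sketched as a scan that groups records by a matching key and reports any group with more than one entry, leaving the delete-or-retain decision to a human. The records and the matching rule (case-insensitive email comparison) are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical existing records, two of which share an email address.
records = [
    {"id": 1, "email": "pat@example.com"},
    {"id": 2, "email": "PAT@example.com"},  # same address, different case
    {"id": 3, "email": "lee@example.com"},
]

groups = defaultdict(list)
for record in records:
    groups[record["email"].lower()].append(record["id"])

# Any key shared by more than one record is a suspected duplication to review.
flagged = {email: ids for email, ids in groups.items() if len(ids) > 1}
print(flagged)  # {'pat@example.com': [1, 2]}
```

An automated cleanup tool would go one step further and keep only one record per flagged group, but flagging first preserves the option to retain intentional duplicates.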
