Determining the processes and procedures for classifying electronically stored information within corporations is a topic of ongoing debate. Some claim that allowing employees to manually classify their own digital information falls in line with the prevailing practices of paper-based records classification. The inherent flaw in this argument is that the prevailing practice of classifying paper records was not complicated by the ability to alter the record with a few basic keystrokes. A banker's box filled with paper records typically contains official signatures and seals that speak to the authenticity of the record. Electronically stored information (ESI) is not as straightforward, because it is complicated by far more factors, such as metadata, embedded data, system logs, audit logs, and viral replication and adaptation. These factors are simply not an issue in a banker's box filled with paper records.
The most obvious complication pertaining to ESI is how to authenticate the information as legitimate. Trusted information is a commodity in the information age; as such, how it is managed speaks volumes about the data's authenticity.
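One common building block for demonstrating that a record has not been altered is a cryptographic digest captured at the time the record is declared official. The following is a minimal sketch of that idea, not a description of any particular platform; the function name and chunk size are illustrative assumptions.

```python
import hashlib

def record_digest(path: str) -> str:
    """Return the SHA-256 digest of a stored record's bytes.

    The digest captured when the record is declared official can later
    be recomputed: any alteration to the file, however small, yields a
    different digest, flagging the record as changed.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large records do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

In practice the baseline digests would themselves need to be stored under access controls (or signed), since a custodian who can rewrite both the record and its digest defeats the check.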
The very nature of how ESI is stored permits information custodians to modify information with a few quick keystrokes or mouse clicks. As such, being able to speak to the authenticity of ESI requires leveraging standards that provide policies, processes, and controls. By capturing these rules up front and programming them into a technology platform designed to apply them automatically against a specified electronic data repository, we effectively leverage technology to provide the oversight and governance necessary to remediate a technology-generated problem.
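The rule-capture approach described above can be sketched in a few lines. This is a deliberately simplified illustration, assuming a hypothetical taxonomy expressed as ordered pattern-to-class rules; the rule names, patterns, and class labels are invented for the example and stand in for whatever a corporation's records policy actually defines.

```python
import re

# Hypothetical taxonomy: ordered (pattern, records class) rules.
# Patterns and labels are illustrative only, not a real policy.
TAXONOMY_RULES = [
    (re.compile(r"\binvoice\b|\bpurchase order\b", re.I), "Financial-Record"),
    (re.compile(r"\bconfidential\b|\btrade secret\b", re.I), "Restricted"),
]
DEFAULT_CLASS = "Unclassified"

def classify(text: str) -> str:
    """Apply the taxonomy rules in order; the first match wins.

    Because the same rules run the same way on every document, the
    platform applies the policy consistently, with no per-user judgment.
    """
    for pattern, label in TAXONOMY_RULES:
        if pattern.search(text):
            return label
    return DEFAULT_CLASS
```

The point of the sketch is the governance property, not the matching technique: once the rules are captured in one place, every document in the repository is measured against the identical policy, which is precisely what manual, per-employee classification cannot guarantee.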
There is an ongoing debate as to whether corporate employees should manually classify their electronically stored information or whether systems should be implemented to classify it automatically based upon pre-defined taxonomy rules. Although some have argued that the Federal Sentencing Guidelines can be interpreted in a manner that makes the manual classification approach reasonable, it is important to recognize that the Federal Sentencing Guidelines have yet to be updated for the digital age in the manner of the Federal Rules of Civil Procedure. With necessity as the mother of invention, this author suspects such an update will occur in the not-so-distant future. In addition, in light of the availability of auto-classification technology and the ability to perform scientific studies on data samples, as evidenced by NIST's TREC Legal Track, the future of this argument seems clear. As judges become better versed in the technologies developed to resolve technology-generated problems, we can look to the 1932 case of The T.J. Hooper v. Northern Barge Corp. for a clue as to how history will repeat itself.
Judge Learned Hand's classic opinion in The T.J. Hooper v. Northern Barge Corp. proves instructive here. There, the defendant's tugboats were towing two of the plaintiff's barges when "an easterly gale" resulted in the loss of both barges. Had the tugs been equipped with functional receiving sets, their operators would have learned of the gale in time to seek safer waters. Although equipping tugs with receiving sets was not yet an industry standard, the court nevertheless found the defendant liable. Importantly, the court stated:
Is it then a final answer that the business had not yet generally adopted receiving sets? There are, no doubt, cases where courts seem to make the general practice of the calling the standard of proper diligence. . . . Indeed, in most cases reasonable prudence is in fact common prudence; but strictly it is never its measure; a whole calling may have unduly lagged in the adoption of new and available devices.

Courts have since recognized the principle set forth in The T.J. Hooper. As such, an argument can be made that technology tools that consistently follow the same rules without error are a more reasonable approach than manual classification.
From my perspective, the approach that makes the most sense for risk mitigation is the one that can be demonstrated to be most consistent and least error prone. Getting human beings to follow even basic corporate security policies is a burden for corporations. (See: http://www.infosectoday.com/Articles/Millennial_Workforce.htm)
As such, expecting users to classify their own data as part of a defensible process is about as promising as a deck hand position on the Titanic. Automated tools developed to comply with records management and security standards represent to corporations what the radio represented to the tugboats in the Hooper case.