IIS already provides machine-learning classification using fingerprints and bloom filters, and we have built a number of custom classifications on top of it. IIS also includes data quality scanner functionality that uses the characteristics of each value to detect values that appear to be outliers compared to the other values in the column. These custom intelligent classifications learn the data rules and logic automatically, rather than requiring a large amount of time and effort to collect those requirements manually from several SMEs, which isn't scalable, especially when dealing with the massive volumes of data the Chief Data Office handles.
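To illustrate the characteristic-based outlier detection described above, here is a minimal sketch (not IIS's actual algorithm, purely an assumed illustration): each value is reduced to a format signature, and values whose signature deviates from the column's dominant format are flagged.

```python
from collections import Counter

def value_signature(value: str) -> str:
    """Map each character to a class: '9' for digits, 'A' for letters,
    punctuation kept literally. E.g. "2021-05-03" -> "9999-99-99"."""
    return "".join(
        "9" if ch.isdigit() else ("A" if ch.isalpha() else ch)
        for ch in value
    )

def find_outliers(column: list[str], threshold: float = 0.8) -> list[str]:
    """Flag values whose format signature differs from the dominant one,
    provided the dominant signature covers at least `threshold` of rows."""
    signatures = Counter(value_signature(v) for v in column)
    top_sig, top_count = signatures.most_common(1)[0]
    if top_count / len(column) < threshold:
        return []  # no clearly dominant format, nothing to flag
    return [v for v in column if value_signature(v) != top_sig]

dates = ["2021-05-03", "2021-06-14", "2021-07-09",
         "2021-08-21", "2021-09-30", "N/A"]
print(find_outliers(dates))  # -> ['N/A']
```

A production scanner would use richer characteristics (length distributions, value frequencies, reference data matching), but the principle is the same: the rule is learned from the data itself instead of being gathered from SMEs.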
Why is it useful?
Who would benefit from this IDEA?
As the Data Governance Product Owner within the Chief Data Office, I would like to migrate from Information Analyzer to Data Refinery, and from IGC to WKC, to manage all my business and technical Data Quality / Data Preparation activities within our Cognitive Enterprise Data Platform, and to execute our CDO DG services using Data Refinery and WKC, so that I can provide the necessary functionality for our data producers and data consumers.
How should it work?