How To Leverage Data Lakes For New Business Opportunities

How valuable would it be if, any time you contact a client or a client contacts you, the intelligence built into your IT landscape automatically and accurately told you the additional business potential that client represents?

What Is A Data Lake?

In the early days of modern computing, the machines of the 1950s, 60s, and 70s collected data from physical repositories (punched cards, keyboard input, and tapes) and dealt with it in memory, that is, in a virtual environment. In the late 1970s, computers started to cluster data for processing in worksheet-like structures, the grandfather of Databases.

Databases were to follow, and the logic behind them was that data should be entered in a logical manner according to a prior human classification; so although raw data (in the sense that it had not been previously treated) went into the database tables, the decision of what would fit where, and why, was based on human criteria.

Databases are in fact a limited tool, since they require specific contexts: there is a Database for Accounting Data, which must be a distinct entity from the Database for Logistics Data (although the two may need to connect and have mutual referencing structures).

So most of the systems and tools in operation today still have their own database structures, which interconnect and exchange data to create added-value information.

The database then evolved into larger clusters of alike data, called Data Warehouses; although the process may differ somewhat, data is still grouped in the same manner while belonging to a given common context.

With the advent of Big Data and today's innovations in data handling and processing tools, we have reached a stage where large flows of several types of data can be channeled into extremely large data repositories (Data Lakes) in an ad-hoc manner that still allows correlations to be established under a chosen "schema" from which to extract meaningful information.

The main leverage of Data Lakes comes from the following characteristics:

  • While Data Warehouses store structured (processed) data, Data Lakes also store semi-structured, unstructured, and even completely raw and (apparently) unrelated data. And no, it is not a waste of time and space, since relationships are often only found after processing.
  • Data Warehouse architectures make large data volumes expensive, whereas Data Lakes are specifically designed for low-cost storage.
  • Data Warehouses obey a pre-defined structure in which data is stored (pre-defined data classes and families), while Data Lakes offer far greater flexibility through dynamic structural configuration, similar to the way humans apprehend information and think (see the schema-on-read sketch after this list).
  • Data that goes into a Data Warehouse is mature (it has a clearly defined context), while Data Lakes store data that is still maturing (still finding its place in the overall context).
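
To make the "structure only when you read" idea concrete, here is a minimal schema-on-read sketch in Python; the records and field names are hypothetical, and a real lake would use engines such as Spark or Hive rather than a plain script.

```python
import json

# Raw, heterogeneous records land in the "lake" exactly as they arrive;
# a schema-on-write warehouse would have forced them into one fixed table.
raw_events = [
    '{"source": "crm", "client": "ACME", "note": "asked about feature X"}',
    '{"source": "billing", "client": "ACME", "overdue": 0}',
    '{"source": "iot", "device": 42, "temp_c": 21.5}',
]

def read_with_schema(lines, wanted_fields):
    """Apply a schema only at read time: keep the records that carry the
    requested fields and project them into a uniform shape."""
    for line in lines:
        record = json.loads(line)
        if wanted_fields <= record.keys():
            yield {field: record[field] for field in wanted_fields}

# Two different "schemas" over the same raw storage, defined after ingestion.
print(list(read_with_schema(raw_events, {"client", "note"})))
print(list(read_with_schema(raw_events, {"client", "overdue"})))
```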

By now you should be realizing that without Data Lakes, the Internet of Things (IoT) and even Artificial Intelligence (AI) would be slow, or outright impossible, to implement in a cost-effective manner.

Think of it as an actual lake into which data flows from several streams (M2M, log files, real-time data collection, CTI, you name it), and over which you can then run the appropriate analytics, thereby extracting added-value information.

Accessing data from different sources such as DB2-based systems, SAP, or any other IT system requires both specific tools and licensing, which means time and money.

Extracting meaningful, new, added-value information out of those distinct IT environments means creating a detailed map of WHAT data is relevant to WHICH requirements, and then developing code that can gather that data in the appropriate sequence to produce information with new, meaningful content.

Let's imagine an example in which your company needs to run a market analysis on the trends and tendencies of a specific group of prospects (potential clients) to whom you wish to promote a new item in your offering portfolio.

Such an exercise, if done over the existing Databases or Data Warehouses of your corporate systems, implies getting accurate answers to the following questions:

  • WHAT characterizes valid data, meaning data about prospects who have a high potential of becoming clients of the new product or service now entering the corporate portfolio?
  • WHERE in the different corporate systems' Data Warehouses does such data lie?
  • WHICH types or families of data hold the most potential for assertively qualifying the prospects with the highest potential?
  • HOW shall the code that enables such accurate data collection and processing be developed (programming languages, processing logic leading step by step to added-value content, and so on)?
  • WHAT computing and storage resources are required to enable the fastest possible processing of all the data found to be relevant?
  • WHEN could we expect the first results?
  • WHICH percentage of "false positives", deviations and errors should we expect?
  • HOW much will it cost? Is it worth the wait, or will time to market already be history by the time we get the information we need?

Data Lakes are an inevitable consequence of widespread access and integration, and social and tech trends like Social Media and IoT will only make them more common.

However, what if there were a cost-effective way to do it: cross-referencing the data you already have about those prospects in your corporate systems (including saved phone conversations) with a "photographic profile" of each of them extracted from, say, Social Media, pouring everything into one single, dynamically scalable repository (so with no need to constantly move data around between existing systems), within a few hours or even minutes? What would that represent for the potential of successfully launching your new product or service?

A Data Lake needs to support the ISASA concept, which specifically represents the ability to (a minimal pipeline sketch follows the list):

  • Ingest data from several distinct streams through appropriate APIs or batch processes.
  • Store dynamically growing amounts of data of unforeseen size in scalable repositories (the Lake) through all the necessary protocols (NFS, CIFS, FTP, HDFS, and others).
  • Analyze the data by finding the relevant correlations according to your needs and expectations.
  • Surface relevant information in a user-friendly manner that conveys what you need to see in the most straightforward, effective way.
  • Act in the most efficient and cost-effective manner, leading you to the intended goals.
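
As a toy illustration of the five ISASA stages wired together, here is a minimal sketch in Python; the class, field names and sample records are hypothetical and stand in for real ingestion APIs, scalable storage and analytics engines.

```python
from dataclasses import dataclass, field

@dataclass
class DataLake:
    storage: list = field(default_factory=list)    # Store: the scalable raw repository

    def ingest(self, stream):                       # Ingest: an API or batch stream
        self.storage.extend(stream)

    def analyze(self, correlate):                   # Analyze: find relevant correlations
        return [r for r in self.storage if correlate(r)]

    def surface(self, records):                     # Surface: a user-friendly view
        return [f"{r['client']}: {r['signal']}" for r in records]

def act(report):                                    # Act: hand results to the sales team
    for line in report:
        print("Follow up ->", line)

lake = DataLake()
lake.ingest([{"client": "ACME", "signal": "asked about feature X"},
             {"client": "Globex", "signal": "overdue invoice"}])
hot = lake.analyze(lambda r: "feature X" in r["signal"])
act(lake.surface(hot))
```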

The Process

A Data Lake is not the solution to your problem; it is the IT landscape that will allow you to get there fast and accurately. As in most cases in life, that means you need to begin by clearly understanding the intended outcome and defining concrete, tangible goals, so:

  1. Start by determining your business objectives
  2. Define and collect the data that will enable you to reach your business objectives
  3. Identify what success looks like for you

In the above-mentioned case of your company going after clients for your new product or service, these would translate to:

  1. Assertively conveying the value proposition to the prospects
  2. Targeting the right prospects (the ones most likely to buy the product or service and therefore become clients)
  3. Increasing Sales and Revenue

Data Lake Architecture

Still considering our case, a company launching a new portfolio item, the structure that needs to be put in place can be organized into the following layers:

Data Sources

The Data Sources Layer provides data streams from corporate systems or other sources, carrying both structured and raw data, via APIs or other plug-ins, regardless of the original storage format.

Processing & Storage Layer

The Processing Layer starts with security applied to the data streams, which means (a small tagging sketch follows the list):

  • Visibility classification and control (Authorizations and Permissions)
  • Multi-level stratification: clustering, data sets, and attributes
  • Labelling of highly sensitive classes of data
  • Authorization views
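
To illustrate what labelling and authorization views could look like in practice, here is a minimal, hypothetical sketch; the tags, roles and record fields are invented for the example and do not correspond to any specific product.

```python
# Hypothetical tag-based authorization view over ingested records.
TAGS = {                        # labelling: each attribute carries a sensitivity class
    "client": "public",
    "crm_note": "internal",
    "overdue_amount": "sensitive",
}

PERMISSIONS = {                 # visibility control: which classes each role may see
    "sales_rep": {"public", "internal"},
    "finance":   {"public", "internal", "sensitive"},
}

def authorized_view(record, role):
    """Project a record down to the attributes the role is cleared to see."""
    allowed = PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if TAGS.get(k, "sensitive") in allowed}

record = {"client": "ACME", "crm_note": "asked about feature X", "overdue_amount": 120.0}
print(authorized_view(record, "sales_rep"))   # sensitive field hidden
print(authorized_view(record, "finance"))     # full view
```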

The appropriate processing structure then needs to be defined, so that the required data can be adequately processed and valuable information extracted. These are the steps that need to be defined (a small batch-transformation sketch follows the list):

  • Process and rules towards establishing Joins, Aggregations, and Correlations
  • Which flows will be streaming-based and which will be batch-based
  • Error Track & Tracing, Registry, and Resolution (Data Exceptions, Error Logs, Correction Tables)
  • False Positives detection (wrong data types, duplicated data, masking)
  • The specific transformation algorithms required to produce new, business-focused data
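
As a small, hedged illustration of one such batch transformation (a join, an aggregation and duplicate handling), here is a sketch using pandas; the tables and column names are invented, and in a real lake the same logic would typically run as Spark or Hive jobs.

```python
import pandas as pd

crm = pd.DataFrame([
    {"client": "ACME",   "interest": "feature X"},
    {"client": "ACME",   "interest": "feature X"},   # duplicate record -> false positive
    {"client": "Globex", "interest": "feature Y"},
])
billing = pd.DataFrame([
    {"client": "ACME",   "overdue": 0.0},
    {"client": "Globex", "overdue": 310.0},
])

clean = crm.drop_duplicates()                              # false-positive / duplicate handling
joined = clean.merge(billing, on="client", how="left")     # join rule
summary = joined.groupby("client", as_index=False).agg(    # aggregation rule
    interests=("interest", "nunique"),
    overdue=("overdue", "max"),
)
print(summary)
```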

Integration Layer

Once valuable information and data have been extracted from processing the Data Lake content, they must either be integrated back into the corporate systems and/or made available to several types of user profiles who, according to their roles and responsibilities, will use them to gain leverage and efficiencies; this is the role of Integration and Governance.

User Interface Layer

Distinct user profiles will need to visualize data and information in different ways, some more holistic, others more granular, exposing details and correlations.

Support Tools

As previously mentioned, although Data Lake content processing could be achieved with entirely "homemade" software tools, developing the whole required suite would be a huge project in itself every time an analysis requiring a Data Lake comes up.

Fortunately, there are several "off the shelf" tools that can simply be configured for the previously mentioned stages and layers of a Data Lake workflow, minimizing the internal effort of developing code for functionality that will certainly be required. Each tool maps to a specific stage of that workflow.

A bit of advice: tools are sprouting like flowers in springtime, so make sure to choose no more than the toolkit your specific needs require, and never two tools that perform the same role.

Best Practice Towards Implementing A Data Lake

Here are the key best-practice elements when setting up the endeavor of creating a Data Lake:

  • Focus on the architecture which best addresses available data.
  • The Data Lake needs to be created according to the available data sources and types, not according to what is required. Although this may seem counterintuitive, the fact is that the final data structure is not entirely foreseeable when you start the process.
  • The Data Lake design must be guided by the types of data you will have at your disposal during processing, not by what will turn out to be useful.
  • Just as the only way to eat a cow is to split it into small portions, you should do the same here: Discovery & Ingestion, Storage & Administration, Quality & Transformation, and Visualization & Interaction must be addressed separately.

Focus On Native Data Types

  • All the architecture components must be able to deal with data in its native format.
  • All architecture components must be set in place while being fully aligned with the type of processes and procedures inherent to the specific Market Vertical requirements that the Data Lake will address.

Service & Governance

  • You must ensure that the Data Lake structure and processing are capable of fast data ingestion regardless of the source, so as not to create a bottleneck at the very beginning of the entire process.
  • Data profiling, security, and policies must be defined in advance as well as tagging, correlation rules, and workflows.

Querying

  • Strong, agile querying processes must be defined and put in place, or the Data Lake will not be responsive enough for your time-to-market requirements.
  • Proper "guidelines" (algorithms and code) that allow swift discovery of correlations and unification criteria must be clearly defined in advance.
  • Analytics that mirror human expectations and goals are the most relevant, since they gather and cluster/correlate the data that meets those human-driven criteria (see the query sketch after this list).
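
To show what such a correlation-driven query could look like, here is a small sketch using SQLite purely for brevity; in a real Data Lake the same query would typically run through engines such as Hive, Spark SQL or Presto, and the table, columns and criteria are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prospects (client TEXT, overdue REAL, interest TEXT)")
con.executemany("INSERT INTO prospects VALUES (?, ?, ?)", [
    ("ACME",    0.0,   "feature X"),
    ("Globex",  310.0, "feature Y"),
    ("Initech", 0.0,   "feature X"),
])

# The "guideline": correlate payment behaviour with demonstrated interest.
hot = con.execute("""
    SELECT client FROM prospects
    WHERE overdue = 0 AND interest = 'feature X'
""").fetchall()
print(hot)   # [('ACME',), ('Initech',)]
```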

Metadata

  • A proper Metadata Catalog needs to be defined in advance, so the Data Lake has the capacity to properly allocate data as it is being ingested (a minimal catalog sketch follows the list).
  • The process must be fully automated in order not to cause errors or processing blockages.
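
Here is a minimal, hypothetical sketch of an automated metadata catalog; the paths, sources and tags are invented, and a production lake would rely on a catalog service (for example a Hive Metastore or an equivalent) rather than an in-memory list.

```python
import time

CATALOG = []   # stand-in for a real catalog service

def register(path, source, fmt, tags):
    """Record where an object lives and how to find it later."""
    CATALOG.append({
        "path": path, "source": source, "format": fmt,
        "tags": set(tags), "ingested_at": time.time(),
    })

def ingest(path, source, fmt, tags):
    # ...write the raw object to lake storage here...
    register(path, source, fmt, tags)     # automated: no manual cataloging step

ingest("/lake/raw/crm/2024-05-01.json", "crm", "json", ["clients", "notes"])
ingest("/lake/raw/pbx/call-0417.wav",  "pbx", "wav",  ["clients", "audio"])

client_data = [entry for entry in CATALOG if "clients" in entry["tags"]]
print([entry["path"] for entry in client_data])
```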

Bottom line, you must “teach” the Data Lake to “think”.

Data Lake “Core Structures”

How is it possible for Google or Facebook to deal with such colossal volumes of information simultaneously?

Do you realize that data will increase exponentially as the Internet of Things expands? We are now gathering data from smartphones, social media, cards, media, coffee machines (!!!) and so many other sources that were unthinkable just 5 years ago.

How will we be able to correlate data from such a huge number of streams in a manner that allows us to “see the big picture” in a way never seen before?

Hadoop

At the beginning of the 21st century, Google was becoming unable to scale its "traditional" database engines and processing capabilities in any way that could deal with the exponentially growing amount of new data flowing in every second. So a new approach was developed: an algorithm (called MapReduce) that started by splitting and channelling data as it arrived to separate processing capacities according to its classification. XML, for instance, would be forwarded to dedicated processing clusters, while Text, SQL, Logs, Objects and so on flowed to other, separate, dedicated processing clusters.
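
To make the idea tangible, here is a minimal, purely illustrative map/shuffle/reduce sketch in Python (not Google's or Hadoop's actual code): records are classified by type in the map step, grouped in the shuffle step as if routed to dedicated clusters, and each group is then reduced independently, which is what makes the work parallelizable.

```python
from collections import defaultdict

records = [
    ("xml",  "<order id='1'/>"),
    ("log",  "GET /index.html 200"),
    ("text", "customer asked about feature X"),
    ("log",  "GET /pricing 404"),
]

# Map: tag each record with the partition (cluster) it should be routed to.
mapped = [(record_type, payload) for record_type, payload in records]

# Shuffle: group records by type, as if routing them to dedicated clusters.
partitions = defaultdict(list)
for record_type, payload in mapped:
    partitions[record_type].append(payload)

# Reduce: each partition is processed independently, hence in parallel.
def reduce_partition(record_type, payloads):
    return record_type, len(payloads)      # e.g. count the records of each type

print(dict(reduce_partition(t, p) for t, p in partitions.items()))
# {'xml': 1, 'log': 2, 'text': 1}
```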

This approach was later used to seed an open-source initiative that would foster the evolution of this new "tech" faster, so that it could generate results in time to keep up with the momentum of ever-growing data streams, and Hadoop was born.

So Hadoop allows extremely large volumes of data to be processed in parallel while maintaining existing meaningful correlations and establishing new ones.

Only, Hadoop was heavily dependent on coding, in Java, to adapt it to each specific new context, and to overcome that time-consuming endeavor the market saw the rise of the previously mentioned tools (Kafka, Hive, Spark, and others).

Cassandra

We could say that Cassandra is Facebook's SQL-style Hadoop, and we wouldn't be far from accurate.

By default, and because its development was based on a database engine architecture, Cassandra works in multi-node clustering mode; the data is therefore not "merely" spread across processing nodes to split the necessary workload, the distribution happens at the database engine level, so the processing is not burdened with that additional task.

This allows processing capacity to be automatically scaled just by adding new nodes to the database cluster.

The cluster nodes are all peers: they have the same degree of priority and of command & control over the data handling process, which assures fault tolerance in case of node downtime or communication interruptions.
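
As a small illustration of what this looks like from the client side, here is a sketch using the DataStax Python driver; the contact points, keyspace, table and replication settings are hypothetical, and the snippet assumes a running Cassandra cluster.

```python
# Sketch with the DataStax Python driver (pip install cassandra-driver);
# contact points, keyspace and table names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])   # every node is a peer
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS lake
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS lake.prospects (
        client text PRIMARY KEY, overdue double, interest text)
""")

# The driver routes each request to a replica owning the partition, so
# capacity grows simply by adding nodes to the cluster.
session.execute(
    "INSERT INTO lake.prospects (client, overdue, interest) VALUES (%s, %s, %s)",
    ("ACME", 0.0, "feature X"),
)
row = session.execute("SELECT * FROM lake.prospects WHERE client = %s", ("ACME",)).one()
print(row.client, row.interest)
```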

What Sets Them Apart

Hadoop was designed to deal with large amounts of unstructured data, with the capacity to process it in parallel and deliver results fast and effectively; for that purpose, it is usually implemented within a single IT landscape, which further enhances its operational approach to data processing and management.

Cassandra, on the other hand, was designed to allow redundant, multi-node, multi-geography, fault-tolerant parallel data processing; it does, however, require a structured data model before implementation.

This means that Hadoop is very useful for data ingestion, analytics and initial processing, while Cassandra can enter the workflow later, ensuring further detailed processing, security, storage and anytime/anywhere consultation once the data has been cataloged in a structured manner.

Leveraging Business From Data Lakes

Coming back to our company launching its new product, and considering an example that covers a complete range of available data, namely existing clients, how can a Data Lake built on this technological landscape support the several teams in their roles?

The Data Lake will collect a full data catalog about the clients: invoicing terms, overdue payments, preferences (via recorded phone conversations, the CRM system or Social Media), product handling capacity (from the SCM corporate system) and so on.

Then the data will be processed to identify high-potential prospects for the new product or service, for example: who has no open payments with the company + has demonstrated interest in features the new product will deliver + has the SCM capacity to handle it. A sketch of this qualification step follows.
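
Here is a minimal sketch of that qualification rule; the field names, sample values and capacity threshold are all invented for the example.

```python
clients = [
    {"client": "ACME",    "open_payments": 0.0,   "interest_match": True,  "scm_capacity": 500},
    {"client": "Globex",  "open_payments": 310.0, "interest_match": True,  "scm_capacity": 800},
    {"client": "Initech", "open_payments": 0.0,   "interest_match": False, "scm_capacity": 200},
]

def is_hot_prospect(c, min_capacity=100):
    return (c["open_payments"] == 0                  # no overdue payments
            and c["interest_match"]                  # showed interest in the new features
            and c["scm_capacity"] >= min_capacity)   # can actually handle the product

hot = [c["client"] for c in clients if is_hot_prospect(c)]
print(hot)   # ['ACME']
```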

This information can then be delivered in different formats: CxO/management-style reporting, or on demand/on click when a client calls the Customer Support hotline or when the company's Sales and Marketing department contacts the prospect, immediately flagging that current client as a high-potential prospect for the new product or service.

How valuable would it be if, any time you contact a client or a client contacts you, the intelligence built into your IT landscape automatically and accurately told you the additional business potential that client represents?

Where Are We Heading?

Data Lakes are a new field of computing, one that most large corporations do not use yet, since their entire IT landscape (which has evolved over decades) necessarily consists of systems in "silos" that interchange data via APIs or other interconnect channels. Several expectations remain open:

  • Can Data Lakes ease the burden on the IT landscape by making it either less time-consuming or less complex?
  • Will we be able to apply multiple "schemas" over one given Data Lake simultaneously, and by doing so leverage multiple "efficiencies" over the existing data?
  • Will we finally reach "cross-organizational" data? Instead of Accounting accessing one set of data about a given client or partner and Logistics accessing other data about the same third party, could everyone access a Fully Integrated Data Profile of that client, according to profile permissions? And since the Data Warehouse did not do the trick, isn't it possible that a Data Lake is just a new name for Data Mart version 2.0?

The entire concept of capturing raw data and applying distinct schemas to it (logical, circumstantial processing) is a very powerful one; in fact, it is precisely what our brain does. We gather raw data and, based on our knowledge (rules gathered through past experience and learning), we are able to "issue" a new set of data that adds value to what we have collected and stored as "knowledge".

And this has everything to do with Artificial Intelligence.

___________

This article was originally published by Tenfold.