Information Consumption and the Laws of Visualization

The Problem

“Visual excellence is that which gives the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space.” – Edward Tufte. Aggregating data to answer a client’s question is often the easy part; shaping those visualizations into a concise story that delivers useful insight is harder. Finding appropriate, effective, and simple visuals to best represent data is a challenge that must be approached from multiple angles. Beyond incorporating user needs, it is essential to apply UI/UX best practices to data visualization. In particular, to truly understand and respond to the user, we should build well-researched psychological principles into our dashboards. In this article, we address these challenges and help you create impactful visuals. We are in an ever-changing dialogue with data, and we want to take you through the journey of combining these human tendencies with business intelligence and analytics to create a better finished product.

Why you Click

Hedwig von Restorff was a German psychiatrist who documented an interesting memory phenomenon: when people are presented with a set of similar objects, lists, or pictures, the one that differs from the rest is the most likely to be remembered. This is the Von Restorff Effect, also known as the isolation effect.


Figure 1: Visual illustrating the ease with which we recognize the dissimilar item.

This is a common principle in user interface design because it shapes how people interact with visuals. The effect stems from our natural tendency to notice what stands out, so leaning on the Von Restorff Effect in your visualizations lets your most important data shine. Applying it within business intelligence is a reliable way to guide readers’ eyes to where you want them to go. The simplest approach is the call-to-action: when you want a user to click on something, make it stand out. Bold it. Italicize it. Underline it. Differentiating an element from the rest of the visual encourages the reader to click and interact, and it clearly separates static items from dynamic calls to action.

The Von Restorff Effect pairs naturally with the serial-position effect and conditional formatting. The serial-position effect describes our tendency to best remember the first and last items in a series. In dashboards and business intelligence visualizations, it therefore pays to place, and conditionally format, the most important information near the beginning and at the end. Showing key performance indicators up front draws readers’ attention and helps them retain those values. Applied together, the Von Restorff and serial-position effects make your information more memorable, accessible, and readable.
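
As a minimal sketch of the highlighting idea, the snippet below greys out every bar in a chart except the one we want remembered. The months, order counts, and colors are illustrative assumptions, not part of any real dashboard.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly order counts; "May" is the value we want readers to remember.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
orders = [120, 135, 128, 141, 210, 138]
highlight = "May"

# Von Restorff in practice: mute every bar except the one that matters.
colors = ["#c8c8c8" if m != highlight else "#d62728" for m in months]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, orders, color=colors)
ax.set_title("Orders by month (May promotion highlighted)")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
plt.tight_layout()
plt.show()
```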

Grouped Objects

The Gestalt principles are another set of laws for designing effective user interfaces. Developed by the Gestalt psychologists of the Berlin school, gestaltism is a theory of how the mind perceives whole forms, and the law of proximity is one of its cornerstones. It states that objects that are near, or proximate, to each other tend to be perceived as a group. Our brains associate objects close to each other far more readily than objects spaced far apart. This clustering happens because humans have a natural tendency to organize and group what they see. In the simplest sense, if I go to a restaurant and see five people sitting at the same table, I immediately, without conscious thought, assume they are friends because of their proximity. This is an innate aspect of human behavior and can be a powerful addition to visualizing data. Implementing the law of proximity in your BI dashboards will help readers take in your information with ease and natural intuition.


Figure 2: Visual illustrating the ease with which we associate the groups in the right visual, compared with the undifferentiated cluster of objects on the left.

Grouping related data categories together in your visualizations allows readers to cluster the information and group the insights. This helps elicit powerful, interactive responses from your audience by delivering the right feedback from a given visual. The Gestalt principle of proximity can tie together what would otherwise be a convoluted dashboard and produce concise, exact, summarized data visuals.
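
As a hedged sketch of the proximity principle, the snippet below lays out hypothetical KPI tiles so that related metrics sit close together in columns, with extra whitespace separating the groups. The group names and values are illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Hypothetical KPIs, already organized into the groups we want readers to perceive.
groups = {
    "Fulfillment": {"On-time %": 94, "Fill rate %": 97},
    "Inventory":   {"Turns": 8.2, "Days on hand": 31},
}

# Proximity in practice: one column of tiles per group, whitespace between groups.
fig, axes = plt.subplots(2, len(groups), figsize=(7, 4), gridspec_kw={"wspace": 0.6})
for col, (group, kpis) in enumerate(groups.items()):
    for row, (name, value) in enumerate(kpis.items()):
        ax = axes[row][col]
        ax.text(0.5, 0.5, f"{name}\n{value}", ha="center", va="center", fontsize=12)
        ax.set_xticks([])
        ax.set_yticks([])
    axes[0][col].set_title(group)
plt.show()
```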

Making Decisions

The amount of time it takes to make a decision after being presented with choices is another crucial principle for gaining appropriate insights from your analytics visualizations. Psychologists William Hick and Ray Hyman studied behavior and decision making and found that the time it takes to make a decision increases logarithmically as the number of choices increases (the Hick–Hyman law).


Figure 3: Visual detailing different ways to present selections; one method is clearly the easiest to navigate.

This law matters most when dashboarding and visualizing your data for decision makers: do not overload the dashboard with visuals. Instead, consider creating summary pages dedicated specifically to drill-downs, calls to action, and items that require decisions. Separating decisions from exploratory visuals lets your readers spend less time making choices and more time consuming the insights. It also applies the Gestalt principle by grouping similar entities. The Hick–Hyman law is yet another principle that will take your visualizations from acceptable to remarkable.
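
The Hick–Hyman relationship described above is easy to see in a few lines of Python. This is a minimal sketch; the coefficients A and B are illustrative placeholders rather than measured values.

```python
import math

# Hick–Hyman law: decision time grows roughly as A + B * log2(n + 1),
# where n is the number of equally likely choices.
A, B = 0.2, 0.15  # illustrative coefficients, in seconds

def decision_time(n_choices: int) -> float:
    return A + B * math.log2(n_choices + 1)

# Doubling the number of choices adds a roughly constant amount of time.
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} choices -> ~{decision_time(n):.2f} s")
```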

Important Words

The F-pattern describes how readers consume information on screen, and it emerged from eye-tracking studies of how people interact with user interfaces. Jakob Nielsen, a leading web usability expert, found as early as 1997 that people read roughly 25% slower on a computer screen than on a printed page, and his group’s later eye-tracking research identified the F-shaped scanning pattern. The pattern traces where on the page a reader typically focuses, following the shape of the letter F: the two horizontal bars correspond to the top, most important pieces of information, while the vertical bar reflects the tendency to scan down the left side of the page, reading only the first few words of each line.


Figure 4: Visual detailing the eye-tracking patterns of participants scanning a web page, forming the F-pattern.

This pattern is another crucial ingredient of a great data visualization. Applying it to your business intelligence dashboard takes it to the next level for gaining efficient and optimal insights: anchor the best content across the top, with precedence given to the top-left corner, the spot most likely to be noticed. Abiding by the F-pattern helps your visuals make a lasting and meaningful impression on your audience.
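
Here is a hedged sketch of how the F-pattern might translate into a dashboard skeleton using matplotlib’s GridSpec. The panel names and proportions are illustrative assumptions, not a prescribed template.

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# F-pattern skeleton: headline content across the top, the key trend down the
# left rail, supporting detail in the remaining space.
fig = plt.figure(figsize=(8, 5))
gs = GridSpec(3, 3, figure=fig)

ax_top = fig.add_subplot(gs[0, :])
ax_top.set_title("Headline KPIs (top bar)")

ax_left = fig.add_subplot(gs[1:, 0])
ax_left.set_title("Key trend (left rail)")

ax_detail = fig.add_subplot(gs[1:, 1:])
ax_detail.set_title("Supporting detail")

for ax in (ax_top, ax_left, ax_detail):
    ax.set_xticks([])
    ax.set_yticks([])

plt.tight_layout()
plt.show()
```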

Conclusion

Proper use of the four principles above, along with efficient, appropriate use of shapes, will bring your business intelligence and analytics to the next level. “Our primary visual design objective will be to present content to readers in a manner that highlights what’s important, arranges it for clarity, and leads them through it in the sequence that tells the stories best.” – Stephen Few. Using the principles in this article, you will be able to present data in the best possible way to satisfy the needs of your audience. Most importantly, end users can better realize the value of dashboarding because the process of obtaining insights aligns with their natural human tendencies. Follow along with our team for an upcoming webinar detailing these dashboards!

Machine Learning is the second-best way of doing anything

You already know the best way

You are the actual domain experts who have ingrained this intuitive problem-solving ability into your very fibers through significant effort over a long and sustained period of time. Machine learning without a domain expert is just that – a machine without intuition.

Change is coming and machine learning is your friend

In this competitive era of technology, where expertise is just a few Udacity courses away and robots seem to be just around the corner to steal our jobs, it is important to understand our true value. Yes, these robots can help. No, they cannot steal what you have taken years to cultivate and hone, especially not if you don’t let them. Change is coming, and it’s not comfortable, but it can make a positive impact on our lives, and a significant one at that. Machine learning tools are our partners, not our replacements. Keep in mind, though, that they will shake us out of our comfortable reverie and toward progress. What does it mean if that manual report that takes 4 hours every week can be automated by an artificially intelligent agent? Does it mean I’m now worth 10% less because I only need to work 36 hours instead of 40? Absolutely not. It means we can reallocate that time; imagine a company that just gained 10% of its time back to focus on anything else. That starts to explain the extreme competitive advantage of machine learning when companies use it the right way, and the right way involves a partnership between their domain experts and these machines.

Machine learning has been around for a while but it needs you

So far, I have referred to machine learning as an actual entity just like you and me. However, this is not true. Machine learning is a process with a deep mathematical foundation and has been around for decades. Although there have been some developments over the years, the core concepts remain. The good news is, we don’t need to know any of that. All we need to know is machine learning cannot produce productive outputs without the input of domain knowledge. And that comes from us.

The math works out

The importance of domain knowledge is deeply rooted in the mathematical underpinnings of machine learning. For example, Bayesian belief networks are a method that relies on the probabilities of events to predict outcomes. The accuracy of such a method suffers severely from inadequate domain knowledge, because we cannot accurately gauge the current state before predicting the future state. A domain expert, however, can work with data scientists to build a better understanding of their world and thereby provide much firmer ground when attempting to predict an outcome. Bayesian belief networks are still used widely in industry for use cases including spam detection, medical diagnostic systems, and even Clippy, our old and trusty Microsoft Word friend.
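
As a rough illustration of why those starting probabilities matter, here is a minimal sketch of Bayes’ rule applied to a toy spam-detection scenario. The prior and conditional probabilities are illustrative placeholders that a domain expert (or labelled data) would supply.

```python
# Toy numbers only; a real system would estimate these from data and expertise.
p_spam = 0.30                # prior: share of mail that is spam
p_word_given_spam = 0.60     # P("free offer" appears | spam)
p_word_given_ham = 0.05      # P("free offer" appears | not spam)

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(f"P(spam | 'free offer') = {p_spam_given_word:.2f}")  # ~0.84 with these numbers
```

If the prior or the conditional probabilities are badly off, the posterior is off too, which is exactly where domain expertise earns its keep.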

Not everyone needs to be an expert in everything

It all comes down to this – domain experts and data scientists must partner to create a system that works better than the sum of its parts. We don’t all need to become data scientists or take the hottest new course on edX. However, we must understand our value and the fact that machine learning allows us the ability to delegate tasks and push ourselves and our companies to new heights. Stay tuned. If you enjoyed this read, please like, comment, or shoot me a message! –Madhav Srinath

Artificial Intelligence – from Turing to Today

Artificial Intelligence is the key to harnessing the potential of the future. Articles detailing this have been circling the media universe for a while now but what is AI and why is it useful? To explore this beyond the oversimplified definition of “smart robots”, let’s start from the early origins.

The Turing Effect

Through the decades there have been significant advancements with heavy AI subtexts, and it is difficult to pinpoint the exact moment of conception. The true inspiration, however, came from none other than the father of computing, Alan Turing. Turing was relentless in his pursuit to uncover the potential for machinery to exhibit intelligent behavior. During the Second World War, he was charged with the seemingly impossible task of outsmarting the German Enigma machine. Different parts of the Enigma machine could be set up in different ways, and each letter was dynamically enciphered as it was typed based on those settings, yielding roughly 15,000,000,000,000,000,000 (15 billion billion) possible combinations. Needless to say, brute-forcing every combination was not an option.

Enter Turing and his team at Bletchley Park. After a superhuman effort, they built a device known as the Bombe, which contained the exact wiring of 36 Enigma machines. Given an intercepted message, it used logic to eliminate the large majority of combinations and then worked through every remaining combination until it found a match. The Allied forces went on to victory in no small part because of these efforts, and Turing was immortalized as a hero. More than that, he was elevated in all technologists’ eyes as the parental figure who gave birth to the revolution that brought us the modern computer.

Okay, so you have seen “The Imitation Game” and nothing about it screamed killer robots. How is this relevant? Turing was responsible for the fundamental paradigm shift that empowered technologists around the world to approach impossible problems in a completely new way. The significance of that victory impressed upon us the limitless potential of a system comprised of an intelligent machine and a human being. Artificial intelligence was born as a symbol of growth, creativity, and collaboration, and the promise of this new-found autonomy spurred brilliant minds around the world to make remarkable breakthroughs in the field.

Artificially intelligent?

AI is a term used by many to describe systems that are intelligent in some way. That intelligence can come from fixed, logic-based rules that have been painstakingly thought through to produce a device that makes decisions based on the parameters of its world. Such a device, at its heart, contains a mathematical representation of the world that defines very specific bounds within which it can operate.

For example, consider a world that we are trying to explain with the equation F(x) = 4x. The equation takes an input x and outputs the result of evaluating 4*x. If we input the number 2 and receive an output of 8, we can rest assured that our mathematical representation is in line with the world it describes. But what if this world expects an output of 10 for an input of 2? Our model now seems awfully rigid: it will always output 8 for an input of 2 and will never be correct in the scenario where the output should be 10.

Let’s look at this another way. Imagine a robot programmed with specific rules that define its behavior, such as: anything it sees with 4 legs is a cat, and anything with 2 legs is a human. The rules work as long as all the robot ever sees are cats and humans. But what if the robot sees a dog? It would not recognize the dog; it would immediately apply the rule “4 legs = cat” and perceive the dog as a cat. The rules programmed into the robot are static and inflexible; they don’t allow it to venture outside the strict imposed bounds.

Of course, the mathematics used in real AI scenarios scales in complexity far beyond the examples above. The main takeaway is that many problems in our world require a robust underlying representation that explains past, present, and even future scenarios. Not only that, we want a model that can autonomously develop and improve based on the varying experiences of its lifetime.
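
To make the contrast concrete, here is a small Python sketch of the rigid rule from the example next to a model that learns its slope from observations. The data points are made up for illustration.

```python
# The hard-coded rule from the article versus a model fit to observations.

def fixed_rule(x):
    return 4 * x          # always 4x, no matter what the world says

# Observations from a "world" where an input of 2 should map to 10 (i.e. 5x).
xs = [1, 2, 3, 4]
ys = [5, 10, 15, 20]

# Least-squares fit of y = w * x: the slope is learned from experience.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(fixed_rule(2))      # 8  -> rigid, wrong in this world
print(w * 2)              # 10 -> learned from the data
```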

Machine Learning evolving from Artificial Intelligence

Figure: Artificial intelligence progression over time.
And thus Machine Learning was born under the expansive umbrella of Artificial Intelligence. ML extended Turing’s vision of a machine simply exhibiting intelligence to a machine that actively increases its intelligence based on the experiences it has, just like a human.

Now let’s go back to the potential realized at the origins of AI. The potential was not of a machine growing so generally intelligent that it could live undetected in our complex human society. Rather, it was of a system that becomes increasingly effective as it experiences more of the world it inhabits. With the breakthrough of ML, humans were no longer the only ones who learned from experience; machines could as well.

Advances in ML have expanded on this concept and now account for a wide variety of problems that complex models can represent and learn from. Deep Learning is a subset of ML that explores the self-sustaining ability of an AI agent to learn from its own outputs, using artificial neural networks to mimic the thought processes of a human being; it has many industry applications in speech and image recognition, and we will explore it in greater detail in upcoming articles. There are many other subsets of ML, including supervised learning, unsupervised learning, and reinforcement learning, but the common element among them is the learning.

Good teaching leads to a machine learning

Figure: Teaching a machine to learn.
Consider a model created using ML methods. We have a set of experiences that can be fed to the initially naive model, along with the actual results we expect. Given an experience as input, the model’s output is measured against the actual outcome, and the machine learns from the comparison: it either positively reinforces its model after a good outcome or negatively reinforces it after a bad one. The beauty of this process is that the machine carries that knowledge into the next experience.

This means the experiences fed to the machine are critical for it to learn in the right way. Otherwise, it will not learn from experiences representative of the world around it and will subsequently perform poorly in real scenarios. Consider an online retailer that specializes in selling shoes and has recently developed a chat-bot that is learning how to advise customers on which shoes to buy. We would naturally expect that chat-bot to be knowledgeable about the different kinds of shoes available. As an extreme example, if it were trained on hats instead, it would not be able to answer any questions about shoes. In a more probable case, if it were trained on shoes that are going out of style or that don’t align with the customer’s interests, the sale would probably still be lost, since the customer would quickly disengage and go elsewhere. This is where it becomes very interesting for a domain expert to partner with the chat-bot and impart his or her knowledge, much as a manager guides a new direct report. Along with effective human-computer interaction methods, this partnership is essential to delivering an enjoyable and relevant experience for the end user.
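
Here is a minimal sketch of that feedback loop in plain Python, with made-up experiences and an illustrative learning rate; real ML frameworks perform the same prediction, comparison, and correction cycle at far greater scale.

```python
# Each experience is (input, actual outcome). The slope hidden in this toy data is 5.
experiences = [(1.0, 5.0), (2.0, 10.0), (3.0, 15.0)]
w = 0.0                # the naive model starts knowing nothing
learning_rate = 0.05   # illustrative value

for epoch in range(200):
    for x, actual in experiences:
        predicted = w * x
        error = predicted - actual          # how wrong were we?
        w -= learning_rate * error * x      # nudge the model toward the right answer

print(round(w, 3))  # converges toward 5.0, the slope hidden in the experiences
```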

We never have to start from scratch

Today’s world brings with it an amazing sense of collaboration among the many who seek to push the boundaries of AI. With complex mathematical algorithms already implemented and available online to everyone, these intellectuals can focus on making AI usable in real business and personal scenarios. Data scientists and developers don’t need to reinvent the wheel every time; they can start from a productive state and continue innovating on a solid foundation. Most importantly, domain experts can partner with these technologists to add specific business context to these products, emphasize relevance, and maximize practicality. The user needs a significant voice, and domain experts know users best. At the same time, this collaboration throws open the doors for everyone to think outside the box and deliver cutting-edge solutions to problems that have seemed impossible until now. Collaboration is the key. Stay tuned. If you enjoyed this read, please like, comment, or shoot me a message! –Madhav Srinath

OMS without BI is like a Compass without a Needle

As customers demand and expect their products “anytime/anywhere,” companies are shifting focus to implementing best-in-class Order Management & Fulfillment software to complement their ERP, WMS, and TMS systems.

Figure: OMS software solution priority (Forrester study).

As the Forrester study above shows, companies will continue to roll out OMS solutions from providers like Manhattan, IBM/Sterling, and Aptos at an even greater speed in the coming months. My fear is that, similar to the large WMS and TMS implementations of the past 10 years, business intelligence may be an afterthought, leaving most companies with limited order, inventory, store, and performance visibility. These best-in-class OMS solutions are great at configuring and managing workflows with cutting-edge algorithms. So the question becomes: why not take full advantage of this great new system by leveraging the business and performance monitoring capabilities of business intelligence (dashboards, reporting, alerts, etc.)? From working with a broad range of companies (retailers, distributors, etc.), below are common key metrics that can be visualized and tracked in real time with a little upfront business intelligence planning as part of an order management implementation; a short sketch of computing a couple of them from raw order data follows the list. I have categorized these metrics into three key focus areas: Order Delivery Performance, Order Mix, and Store & Inventory Metrics. These metrics become even more powerful with a full view of your enterprise-wide data (inventory, handling, and transportation costs) from your WMS, TMS, and ERP systems for one version of the truth.

Order Delivery Performance
  • Average order lead times and duration of late orders (order aging)
  • On-time delivery performance vs customer request time vs promise time
  • % perfect order & fill rate (total/line/units/$)
  • % orders shipped damage-free, with correct documentation
  • Order cycle times (internal and total)
  • % orders on time ready to ship
Order Mix
  • Order breakdown by channel & product category
  • % completely fulfilled by DC, % completely fulfilled by store, % mixed, etc.
  • % product line customized and/or personalized
  • % of transactions for which available-to-promise (ATP) used
  • % Peak Orders
  • Backorders as a percent of total orders/lines/$/units
Store & Inventory Metrics
  • Average per day store capacity in units/
  • Average per day store utilization %
  • Inventory in process vs. store demand
  • Inventory demand/sales by store
  • Daily units/$ received by store by product category
  • Number and % of how an order was allocated and fulfilled (e.g. routed to store 1 and cancelled there, routed to store 2 and cancelled there, finally fulfilled at store 3; number of hops).
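
As promised above, here is a hedged sketch of how a couple of these metrics could be computed from an OMS extract. The column names (promise_date, delivered_date, damaged, docs_correct) are assumptions for illustration and would need to be mapped to your actual order data.

```python
import pandas as pd

# Toy order extract; columns are illustrative assumptions, not a real OMS schema.
orders = pd.DataFrame({
    "order_id":       [1, 2, 3, 4],
    "promise_date":   pd.to_datetime(["2020-01-05", "2020-01-06", "2020-01-07", "2020-01-08"]),
    "delivered_date": pd.to_datetime(["2020-01-05", "2020-01-08", "2020-01-07", "2020-01-07"]),
    "damaged":        [False, False, True, False],
    "docs_correct":   [True, True, True, False],
})

# On-time delivery vs. promise time, and a simplified "perfect order" flag.
on_time = orders["delivered_date"] <= orders["promise_date"]
perfect = on_time & ~orders["damaged"] & orders["docs_correct"]

print(f"On-time delivery vs promise: {on_time.mean():.0%}")  # 75% on this toy data
print(f"Perfect order rate:          {perfect.mean():.0%}")  # 25% on this toy data
```
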
This is by no means a comprehensive list, so please share some of the key metrics that you are tracking or plan to track as part of your OMS implementation.

Tim Judge is President & CEO of Agillitics, a supply chain business intelligence and analytics firm based in Atlanta, GA.

Keith Robbins, Director Supply Chain Program Management

Keith Robbins joins Agillitics Team as Director, Supply Chain Program Management

Atlanta – August 5, 2015 – Today Agillitics, LLC, announced that Keith Robbins has joined Agillitics as Director, Supply Chain Program Management. “We are all really excited to have Keith on board at Agillitics. Keith brings a wealth of experience in managing very complex technology projects. His leadership qualities are also a perfect fit with our continuous learning culture.” Receive up-to-date news directly from Agillitics on Twitter, LinkedIn, and Facebook.

About Keith Robbins

Keith Robbins is an experienced supply chain leader specializing in supply chain assessments, transformations, and implementations. He has over 17 years of project experience, the last 10 in supply chain. Keith has worked with many international Fortune 500 multi-channel retailers, food distributors, wholesale distributors, 3PL providers, and life science companies. His responsibilities include overall project management, solution/process design, training development and execution, testing management, and application support. He has expertise in all Manhattan Associates software tools, as well as various tools developed by other supply chain and enterprise software vendors. Keith’s strengths include strong project management, communication, and an understanding of each client’s business that helps supply chain-oriented organizations make better-informed decisions about their operations. Keith holds a Bachelor of Science in Biology from Emory University and an MBA from the Georgia Institute of Technology (Georgia Tech) with concentrations in MIS and Finance.

About Agillitics

Agillitics is an innovative supply chain planning, business intelligence and analytics professional services firm.  The Firm works across industry verticals to help clients leverage their data to measure and improve operations, increase sales, and meet complex customer demands. For more information, please visit www.agillitics.com.

Supply Chain “Predictive” Analytics

In our last few posts we focused on the importance of bringing supply chain data into an Enterprise Data Warehouse (EDW) (http://bit.ly/SupplyChainED) and the value achieved (ROI) of doing so (https://bit.ly/2nqGv62). Staging and storing data enables essential descriptive and diagnostic analytics. Predictive analytics is a natural next step in the analytics maturity model.

Below is a visualization of the Supply Chain Analytics Maturity Model from Gartner. What type of analytics is your company currently implementing? We would love to hear from you.

Figure: Supply Chain Analytics Maturity Model (Gartner).

What can I use Predictive Analytics for in my Supply Chain?

You are probably familiar with some of the more “classic” predictive modeling applications in the supply chain, such as demand forecasting and transportation planning tools. Advances in processing capabilities and maturing technologies such as NoSQL databases and Hadoop clusters enable us to speed up and extend these classic applications. Machine learning complements traditional linear programming and other algorithms to make “smarter,” data-driven predictions. Below are some of the more popular uses for predictive modeling, analysis, and machine learning techniques in the supply chain.

• Order and Shipment Delivery Estimates

Tracking service levels is an important aspect of maintaining a customer focus. If predictive analytics can alert you when orders or shipments are going to be late, account representatives can intervene to either notify the customer or help move the order along. Advanced algorithms take inputs from various sources (traffic patterns, carrier pickup times, weather data, etc.) to compare current events against historical measures and estimate service levels for order and shipment delivery. This provides opportunities to proactively influence customer service.
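
As an illustrative sketch (not a production model), a simple classifier can score in-flight shipments for late-delivery risk. The features and training rows below are made up; a real model would be trained on historical shipment data with far richer inputs.

```python
from sklearn.linear_model import LogisticRegression

# Illustrative features per shipment: [distance_miles, carrier_pickup_delay_hrs, bad_weather]
X = [
    [120, 1, 0], [450, 6, 1], [300, 2, 0], [700, 8, 1],
    [ 90, 0, 0], [520, 5, 0], [610, 7, 1], [150, 1, 0],
]
y = [0, 1, 0, 1, 0, 1, 1, 0]  # 1 = delivered late (made-up labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score an in-flight shipment and alert an account rep if the risk is high.
risk = model.predict_proba([[480, 4, 1]])[0][1]
if risk > 0.5:
    print(f"Late-delivery risk {risk:.0%}: notify the customer or expedite.")
else:
    print(f"Late-delivery risk {risk:.0%}: no action needed.")
```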

• Items that should be stored, shipped, and sold together

Understanding which items are sold together helps with product placement in the store and improves key store metrics such as average basket size, average ticket size, and overall store revenue. In addition, similar association rules can be used to understand how related items should be stored in the DC and replenished together to the store. These algorithms are also widely used outside the supply chain, combined with recommendation engines, to make it simple for customers to add complementary items to their basket at checkout.
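
Here is a minimal sketch of the underlying association analysis, using made-up baskets and plain Python rather than a dedicated library; support and confidence are the two quantities most rule-mining tools report.

```python
from collections import Counter
from itertools import combinations

# Made-up transactions for illustration only.
baskets = [
    {"running shoes", "socks"},
    {"running shoes", "socks", "water bottle"},
    {"dress shoes", "polish"},
    {"running shoes", "water bottle"},
    {"dress shoes", "polish", "socks"},
]

item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets for p in combinations(sorted(b), 2))

n = len(baskets)
for pair, count in pair_counts.most_common(3):
    a, b = sorted(pair)
    support = count / n                   # how often the pair appears together
    confidence = count / item_counts[a]   # confidence of the rule a -> b
    print(f"{a} -> {b}: support={support:.0%}, confidence={confidence:.0%}")
```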

• Item Obsolescence

Companies often get stuck with inventory that they need to write off or sell at a significant markdown. Predictive analytics lets you identify items in danger of becoming obsolete based on similar products, historical data, and rapid changes in demand. Algorithms can notify inventory and sales personnel of these SKUs and recommend selling them through a promotion in another channel (e.g. online) or sending them to alternative stores to sell quickly without a significant markdown.
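
One simple, hedged way to flag obsolescence candidates is to compare recent demand against a trailing baseline; a real model would add seasonality, product similarity, and lifecycle signals. The SKUs and weekly unit figures below are illustrative.

```python
import pandas as pd

# Toy weekly demand history for two SKUs.
demand = pd.DataFrame({
    "sku":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "week":  [1, 2, 3, 4, 1, 2, 3, 4],
    "units": [100, 95, 90, 88, 120, 80, 40, 10],
})

recent = demand[demand["week"] == 4].set_index("sku")["units"]     # latest week
baseline = demand[demand["week"] < 4].groupby("sku")["units"].mean()  # trailing average

at_risk = (recent / baseline) < 0.5   # demand has halved or worse
print(at_risk[at_risk].index.tolist())  # ['B'] in this toy data
```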

Summary

As the BI Maturity Model below illustrates, once companies have had success with operational reporting and self-service analysis and have brought their supply chain data into a centralized data warehouse, they are well positioned to move on to the next level, predictive analytics, which is part of an advanced data mining strategy.

Figure: BI Maturity Model with predictive analytics.

About Agillitics

Agillitics is a full service business intelligence and analytics consulting firm that focuses on supply chain systems. The Firm works across industry verticals to help clients leverage their data to measure and improve operations, increase sales, and meet complex customer demands. For more information, please visit www.agillitics.com.

Supply Chain Data Rich But Insight Poor?

In our last post, we looked at the key reasons for bringing supply chain data into your company’s Enterprise Data Warehouse (https://bit.ly/2vw1SHh). Today we will take a look at the potential value and the compelling ROI that companies can achieve by embarking on this type of engagement.

Figure: Supply chain data rich but insight poor.
About Agillitics

Agillitics is a full service business intelligence and analytics consulting firm that focuses on supply chain systems. The Firm works across industry verticals to help clients leverage their data to measure and improve operations, increase sales, and meet complex customer demands. For more information, please visit www.agillitics.com.

Why Supply Chain Data in Your EDW Is Crucial

In our last post, we focused on common myths about Business Intelligence (BI) implementations in the supply chain (https://bit.ly/2nnBw5N). Today we will take a look at why bringing supply chain data into your Enterprise Data Warehouse (EDW) is so critical and how it allows you to create a competitive advantage for your organization.

Quick Background

Most companies have a system of record (or ERP) that contains valuable information to run their business (e.g. financial, human resource, master data). Furthermore, most companies have an EDW that stores this data for long periods to provide the business with access to critical reports when they need them.

Why Should Companies Care?

If more companies were able to combine their ERP data with crucial transactional data from systems they are already implementing today such as CRM, SRM, and supply chain execution systems (WMS, LMS, TMS, OMS/DOM), they would enable comprehensive reporting across the organization and uncover valuable insights to drive supply chain efficiency and effectiveness.

One Version of the Truth

Combining ERP data with SCM data in a central place allows companies to measure corporate and operational KPIs that link financial data to labor, freight, inventory, and order progress at a granular level. Having one view of the truth ensures alignment vertically and horizontally across the organization, as well as with all external stakeholders. As an added bonus, you also save a great deal of money and time by avoiding unnecessary duplicate development efforts, confusion among departments, and risky reporting practices that can accidentally degrade your production system.
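
As a minimal sketch of what “one version of the truth” looks like in practice, the example below joins an illustrative ERP order extract with a TMS freight extract so revenue and freight cost can be reported side by side. Table and column names are assumptions, not a real schema.

```python
import pandas as pd

# Illustrative extracts that would land in the EDW from different source systems.
erp_orders = pd.DataFrame({"order_id": [1, 2, 3], "revenue": [500, 800, 300]})
tms_freight = pd.DataFrame({"order_id": [1, 2, 3], "freight_cost": [40, 120, 35]})

# One conformed view: financial and freight data on the same grain (the order).
combined = erp_orders.merge(tms_freight, on="order_id")
combined["freight_pct_of_revenue"] = combined["freight_cost"] / combined["revenue"]
print(combined)
```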

Driving Operational Excellence

Significant cost, profitability, customer service, and cycle metrics can be tracked and measured with full access to historical and current supply chain data in one central repository. For instance, companies have found that they can now measure cost and profitability metrics across the full order lifecycle, inventory, freight, purchasing, labor performance, supplier, and customer channels. Furthermore, they can understand how they are performing compared to historical performance, their goals, and industry benchmarks. You literally have all the data you need at your fingertips, when you need it.

About Agillitics

Agillitics is a full service business intelligence and analytics consulting firm that focuses on supply chain systems. The Firm works across industry verticals to help clients leverage their data to measure and improve operations, increase sales, and meet complex customer demands. For more information, please visit www.agillitics.com.