Unlocking the Power of Real-Time Data Across Industries

In today’s fast-paced digital world, real-time data is transforming how businesses operate, enhancing customer experiences, driving marketing success, and building trust. This dynamic data enables companies across various industries to immediately respond to customer behaviors and market changes, creating a competitive edge. Let’s explore how real-time data can be leveraged across different sectors.

About the author: Merinda Hillier, VP Marketing EMEA at Tealium (one of the sponsors of the DDMA Digital Analytics Summit 2024).

1. Retail: Delivering Personalized Customer Experiences

Retailers are leading the way in using real-time data to enhance customer interactions. With real-time insights, retailers can track customer behavior across both online and physical stores. For instance, when a customer browses a product online, a retailer can instantly send them a personalized offer or recommendation through email or a mobile app. Additionally, in-store behaviors, such as lingering in a particular section, can trigger tailored promotions or assistance, boosting the likelihood of a purchase.

This level of personalization not only increases sales but also improves customer satisfaction. By understanding and responding to customer needs in real time, retailers can create a seamless shopping experience that feels personal and relevant.

2. Finance: Enhancing Risk Management and Customer Engagement

In the finance sector, real-time data is critical for managing risk and improving customer engagement. Financial institutions can monitor transactions in real time to detect and prevent fraudulent activity. For example, if an unusual transaction pattern is detected, the bank can instantly notify the customer and take preventive action, such as freezing the account or requiring additional verification.

Moreover, real-time data helps banks and financial services companies provide personalized financial advice. By analyzing a customer’s spending habits, income, and financial goals, they can offer tailored investment opportunities or budgeting tips in real time, enhancing customer engagement and trust.

3. Healthcare: Improving Patient Care and Operational Efficiency

Real-time data is revolutionizing healthcare by enabling more accurate and timely patient care. For example, wearable devices can monitor a patient’s vital signs and send real-time data to healthcare providers. If any abnormalities are detected, the healthcare team can intervene immediately, potentially saving lives.

Additionally, real-time data helps healthcare facilities manage resources more effectively. Hospitals can track patient flow and optimize staffing levels or bed availability in real time, ensuring that patients receive timely care without overburdening healthcare workers.

4. Manufacturing: Optimizing Production and Reducing Downtime

In manufacturing, real-time data is essential for optimizing production processes and minimizing downtime. Sensors on machinery can monitor performance and detect potential issues before they lead to breakdowns. This predictive maintenance approach, powered by real-time data, reduces costly downtime and extends the life of equipment.

Furthermore, real-time data enables manufacturers to adjust production schedules on the fly based on demand fluctuations. This agility helps companies meet customer orders more efficiently and reduces waste, contributing to a more sustainable and profitable operation.

5. Marketing: Supercharging Campaigns with Real-Time Insights

Marketing is another area where real-time data is making a significant impact. Traditional marketing campaigns often rely on static data, which can quickly become outdated. However, by using real-time data, marketers can create dynamic campaigns that adapt to customer behaviors as they happen.

For example, a customer who has just browsed a specific product online might receive an email or ad featuring that product within minutes, encouraging them to make a purchase while their interest is still high. Real-time data also allows marketers to test and refine campaigns on the go, optimizing for the best performance.

6. Building Customer Trust Across All Industries

No matter the industry, real-time data plays a crucial role in building and maintaining customer trust. When businesses use real-time data to respond to customer needs promptly, they demonstrate a high level of care and attentiveness. For example, a customer who experiences an issue with a product or service can receive immediate support based on real-time data, preventing frustration and enhancing their overall experience.

Moreover, being transparent about how real-time data is collected and used helps to build trust. Businesses that prioritize data privacy and security, while still delivering personalized experiences, can strengthen their relationships with customers and foster long-term loyalty.

Conclusion: The Strategic Use of Real-Time Data

Real-time data is not just about speed; it’s about making data-driven decisions that enhance customer experiences, improve operational efficiency, and build trust. Whether it’s in retail, finance, healthcare, manufacturing, or marketing, the ability to leverage real-time insights gives businesses a significant competitive advantage. By integrating real-time data into every aspect of their operations, companies can not only meet but exceed customer expectations, driving success in today’s fast-paced market.

For more information and detailed examples of how real-time data can transform your business, explore Tealium.


Clean Data, Clear Vision: Unlock the Power of Self-Serve Data and Analytics

Building and maintaining a strong self-serve analytics program is essential for top-performing companies. But not all businesses make it a priority. In conjunction with Women in Analytics, Amplitude hosted a three-part webinar series to tackle that issue and other challenges:

  • Demystifying Data Governance dug deep into why governance matters and how to get started or step up your efforts.
  • Unleashing Self-service Analytics shared the promise of making real-time data available to employees.
  • The Impact of Cultivating a Data-Driven Culture wrapped everything up with advice on gaining buy-in at every level of your organization.

Throughout these three discussions, led by Women in Analytics member Kristy Wedel, Learning Experience Manager at AlignAI, the speakers shared plenty of tips for startups and enterprises alike. Whether you’re looking to take an established program to the next level or struggling to get the resources you need, read on for expert advice.

About the author: Michele Morales is a product marketing manager at Amplitude, sponsor of the 2024 DDMA Digital Analytics Summit.

Speed and agility are winning propositions

Analytics isn’t new, but the way companies gather and use data has completely changed in the last ten years. Digital analytics, especially self-service analytics, can be the difference between growth and a slow descent into irrelevance. As Sumathi Swaminathan, VP of Engineering at Amplitude, says, “Being able to quickly adapt to changing business conditions and market dynamics is critical for companies to win in their space.”

Traditional business intelligence tools often require decision-makers to formulate questions and wait for data to inform solutions. Now, there is no need to choose between speed and information.

“Broadly, I think businesses now understand that speed is the only way to grow revenue, diminish competition, and stay fresh in customers’ minds.”
Sumathi Swaminathan, VP of Engineering, Amplitude

Self-serve analytics enables that agility—and not just at the leadership level. By understanding prospective and existing customer behavior, employees in departments ranging from marketing to product to operations are empowered to make quick, data-driven decisions. Self-serve analytics puts the answers directly in the hands of those with the big questions.

The best way to get started: Just do it

Many companies are intimidated by implementing a new self-serve analytics program, especially given the complexity of legacy business intelligence products.

“As you start making faster decisions . . . the value will become very obvious to the business.”
– Sumathi Swaminathan, VP of Engineering, Amplitude

Sumathi recommends a gradual transition plan that eases in new tools while enabling employees to continue using their old system. Offer training, set key performance indicators (KPIs), and launch pilot projects to show fast value.

Carolyn Feibleman, Principal Product Manager at Amplitude, adds that giving the change a positive spin can do wonders. Look to emulate your successful peers, using their data policies to inform yours. Competition can help motivate your employees to embrace your new system.

Analytics is an incremental process

It’s tempting to take advantage of everything a new platform has to offer, but our panelists recommend launching an analytics program step by step.

Before working with data, champions can start by building data glossaries. Even if it’s not at a company-wide level, you can decide within your team what each product metric (user session, repeat visit, monthly active user, etc.) means. Share this glossary with your collaborators, and you’ve already started to build alignment on your company’s approach to data and analytics.
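As an illustration of the glossary idea, here is a minimal sketch in TypeScript. The metric names and definitions are invented examples, not Amplitude's official definitions; the point is simply that a shared, queryable glossary beats ad hoc redefinition in every document.

```typescript
// A minimal team-level data glossary (illustrative names and definitions;
// replace with the metrics your own team agrees on).
interface MetricDefinition {
  name: string;
  definition: string;
  window?: string; // aggregation window, if the metric has one
}

const glossary: Record<string, MetricDefinition> = {
  user_session: {
    name: "User session",
    definition: "A sequence of events from one user with gaps under 30 minutes",
    window: "30 min inactivity timeout",
  },
  repeat_visit: {
    name: "Repeat visit",
    definition: "A session from a user with at least one prior session",
  },
  monthly_active_user: {
    name: "Monthly active user",
    definition: "A user with at least one event in the trailing 30 days",
    window: "trailing 30 days",
  },
};

// Collaborators look terms up here instead of redefining them ad hoc.
function define(term: string): string {
  const entry = glossary[term];
  return entry ? `${entry.name}: ${entry.definition}` : `Undefined term: ${term}`;
}
```

Even a flat file like this, checked into a shared repository, gives collaborators one place to settle what a "session" or an "active user" means.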

“Any of us who are really responsible for data and who care about where we’re going . . . anyone can start.”
Carolyn Feibleman, Principal Product Manager, Amplitude

Then, start with key questions your team wants answered—two or three pieces of information that can lead to quick wins. Delivering value to your organization will make your team feel proud of their work and prove to leadership that further investment is worth it. Once you have that first win, ask what the next urgent question is and use it to guide your next project. Build momentum by offering a series of small achievements and, before you know it, you’ll have a robust analytics program.

Data governance matters as much as data access

Data wins don’t happen without careful curation of your data, which is why Carolyn advocates for a strong data governance program. Successful governance results in organized data, alignment on who is responsible for data, and strong oversight to ensure your data is well cared for.

It’s easy to make a business case for a data governance strategy, especially considering the compliance issues at stake. Ensuring you’re collecting, storing, and using data appropriately is essential to risk management.

Carolyn points out that executives often assume IT or other technical teams should own risk management because they see it as inherent to a system. But she cautions against that viewpoint. Data breaches come back to the highest levels of leadership. A smart CEO is plugged into their company’s security practices and stays personally aware of potential risks and threats.

“A lot of us don’t think about the cost of having too much or duplicative data.”
Carolyn Feibleman, Principal Product Manager, Amplitude

There’s also the question of optimizing costs and utility when collecting and storing data. For example, data stored twice—or kept for too long—will cost more as it accumulates. And data that comes without metadata will be much less useful. A strong data governance program oversees these storage and labeling practices and multiplies the return on investment (ROI) on your data collection efforts.

The three pillars of data governance

Morgan Templar, CEO of First CDO Partners, observes that a strong data governance framework has three parts: education, instrumentation, and maintenance.

  • Education refers to understanding what data to collect and store, and whether it complies with company policies and industry regulations.
  • Instrumentation refers to the consistency of data names and labels. Things like capitalization, use of underscores and dashes, and necessary metadata fall within this category. Standardizing your system at the start saves headaches in the future. Once you have a standardized taxonomy, you can start implementing it in your data tool.
  • Ongoing maintenance keeps your data useful and your company in compliance. Accountability matters, whether this effort is owned by a single team or a cross-functional data council.

“[There is] technical debt, and [there is] data debt. . . . We have to fix it. We have to clean it. We have to get it ready and structured.”—Morgan Templar, CEO, First CDO Partners
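The instrumentation pillar can be made concrete with a small lint check. The sketch below is a hypothetical example: the lowercase snake_case convention and the required metadata fields (`source`, `owner`) are assumptions standing in for whatever taxonomy your team standardizes on.

```typescript
// Illustrative taxonomy check: event names must be lowercase snake_case,
// and every event must carry the metadata fields the team agreed on.
const NAME_PATTERN = /^[a-z][a-z0-9]*(_[a-z0-9]+)*$/;
const REQUIRED_FIELDS = ["source", "owner"]; // example metadata; pick your own

interface TrackedEvent {
  name: string;
  metadata: Record<string, string>;
}

// Returns a list of problems; an empty list means the event conforms.
function lintEvent(event: TrackedEvent): string[] {
  const problems: string[] = [];
  if (!NAME_PATTERN.test(event.name)) {
    problems.push(`name "${event.name}" is not lowercase snake_case`);
  }
  for (const field of REQUIRED_FIELDS) {
    if (!(field in event.metadata)) {
      problems.push(`missing metadata field "${field}"`);
    }
  }
  return problems;
}
```

Running a check like this in code review or CI catches inconsistent capitalization and missing metadata before they ever reach your data tool.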

Always start by defining your goals

Every data program or initiative Amplitude VP of Value Marchelle Varamini launches starts with a plan. A clear objective gives each stakeholder a mission and prevents projects from floundering in the depths of their data collection. What should that objective be? Marchelle always looks to the KPIs that departments track.

“In any conversation and in any decision you’re making, make sure you’re anchored on the business reasons for the work you’re doing, versus it being the work you’re doing to define business reasons later.”
Marchelle Varamini, VP of Value, Amplitude

Marchelle likes to use value maps to start any discussion. She starts by asking what a business is trying to achieve—what it values. Next, she digs into any challenges that surround each value. That’s when she starts thinking about the metrics that can be used to solve them.

Effective data operations are collaborative, not isolated

Strong data operations include two types of collaboration: between employees in different roles on the same team and between different teams across the company. Combined data efforts are a sign of a strong data culture, so Marchelle likes to encourage cross-functional work.

One way to encourage effective collaboration across the company is by creating a team to coordinate collaborative efforts. Companies with mature data operations designate employees to think strategically about how the company will gather and use data, oversee governance, and bridge the gap between executives and employees when questions about data practices arise. Some have a centralized data team; others have a data council that calls in stakeholders from across the company. Either approach can work as long as the team understands the needs of every person who uses data.

For companies that don’t have teams dedicated to thinking about data, Marchelle likes to encourage employees to do this work. She asks questions to make them think about how others might use the same data they’re gathering. Every part of a company is interconnected, and most datasets have a secondary use elsewhere.

Building a data culture from the ground up

Most of us aren’t executives, so we can’t declare our companies will be data-driven and make it so. We have to lead from the bottom up.

As an employee and user of analytics tools, the best thing you can do is use data to tell the story of how you’re making a difference. Showing the impact of your work supports a larger data culture, which is why each of our panelists talked about building projects with milestones and celebrating small wins.

Of course, getting more resources for your grassroots program means winning executives over. Both Morgan and Carolyn agree the CFO can be a strong ally once you find a way to move the needle. Demonstrating a big ROI from a small initial investment will convince them these efforts are worth it.

You may face political challenges—sometimes, data will uncover failures that others won’t want to publicize. Other times, competing organizational priorities take the resources you were hoping for. These are hard to overcome, but on the flip side, data can identify weak spots and point to their solutions. Marchelle says she’s always found it easier to get help if she admits something is wrong. Additionally, data may help you see when two initiatives that seem at odds with each other are connected.

The only way to learn these truths is through experimentation, which takes us back to our second lesson: Sometimes, you have to become a champion and get things rolling.

Experimentation makes our work richer

As companies grow out of startup mode, they tend to lose their taste for curiosity. Meanwhile, some startups fail so fast it becomes clear they were answering a need that never existed. Data and analytics can help companies navigate both obstacles if they’re used thoughtfully.

Framing analytics as a scientific process, as Marchelle does, helps teams iterate toward growth. After all, if you don’t prove your hypothesis correct, the logical next step is to ask what other factors you might test. This mindset can also help failure-averse teams or corporations rekindle the habit of trying new things. There’s no such thing as a “failed” experiment. Even if yours didn’t turn out as expected, you’ll still learn from your data.

Over the past decade, we’ve seen the power of analytics firsthand as companies have used analytics platforms to navigate a shifting digital product landscape. If you set your company up with a strong data program now, you’ll be ready to keep up with the next decade’s changes.

Key takeaways

  • Self-service analytics are powerful for companies that want to be agile, and the best way to get started is by empowering teams to dive in with pilot projects.
  • Leading companies think about data governance early to reduce risk and ensure efficiency.
  • Questions connected to business value are the best starting point for meaningful analytics work.
  • Effective companies encourage collaboration so teams can understand each other’s data needs and work together to solve problems.
  • A strong data culture comes from both the leadership and the team level, and you can champion a data-driven approach from either.

Shall we not talk about AI for once? Your dataset is likely not ready for it. Just not yet.

Did you happen to see the Netflix documentary about Fyre, the once-in-a-lifetime luxury music festival in the Bahamas?

In case you missed it, the overhyped festival, a supposed mix between Coachella and Burning Man, ended up becoming “The Greatest Party That Never Happened”. Everyone was talking about it, it was overhyped, people bought tickets driven by “Fear Of Missing Out” and the organisers had little or no experience with organising such a large festival.

Why is this relevant for a blog on the website of the Digital Analytics Summit, you might wonder? Well, there are actually many parallels between the current state of AI and the Fyre Festival. Everyone is talking about AI, it is a bit overhyped, there is a lack of experience, and marketers are suffering from “Fear Of Missing Out”.

But there is no need to be afraid you will miss out on AI. Conferences like the Digital Analytics Summit and blogs like this one give you valuable tips to help you avoid making the same mistakes others made before you.

About the authors: Suze Löbker and Harm Linssen are co-founders of Code Cube (one of the sponsors of the DDMA Digital Analytics Summit on 10 October 2024).

Recent data collection challenges

With online marketing being a significant and indispensable driver of traffic to any webshop, the importance of data is growing year after year. In the “State of Martech 2024” report by Chiefmartec, 71% of the respondents (leading martech and marketing operations professionals) reported that they have already integrated a data warehouse into their martech stack.

We all acknowledge the importance of data, and we all have to continuously navigate between collecting valuable data on the one hand and respecting the customer’s privacy and staying legally compliant on the other.

In the recent past we have seen businesses migrate en masse to server-side tracking (whereas traditionally, behavioural data was captured in the visitor’s browser). The goal was clear: to improve data quality and secure it for the longer term, because server-side tracking is not affected by ad blockers or tracking-prevention settings in browsers.

More recently, we have seen almost every website implement Google’s Consent Mode to ensure GDPR compliance while still being able to track valuable data for insights.
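As a sketch of what Consent Mode looks like in practice: Google's documented `gtag('consent', ...)` commands set a denied-by-default state before any tags load, and update it once the visitor makes a choice. The snippet below stubs `gtag` and `dataLayer` locally so it is self-contained; the two consent keys shown are standard, but check Google's current documentation for the full set (Consent Mode v2 adds `ad_user_data` and `ad_personalization`).

```typescript
// Self-contained sketch of the Consent Mode default-then-update flow.
// In a real page, the gtag snippet from Google defines these globals.
const dataLayer: unknown[] = [];
function gtag(...args: unknown[]): void {
  dataLayer.push(args);
}

// Before any tags load: deny storage by default until the visitor chooses.
gtag("consent", "default", {
  ad_storage: "denied",
  analytics_storage: "denied",
});

// Called by the consent banner once the visitor accepts.
function onConsentGranted(): void {
  gtag("consent", "update", {
    ad_storage: "granted",
    analytics_storage: "granted",
  });
}
onConsentGranted();
```

The key point is ordering: the `default` command must run before any measurement tags fire, so that no cookies are written for visitors who never consent.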

Is your data tracking ready for AI?

AI models depend on clean and complete data flowing into them. Without correct data, any model lacks the fuel to deliver an intelligent prediction.

Therefore your data collection process (the foundation of any data-driven strategy) should be your top priority. Without validated, rich data, you will never be successful with AI.

So in order to ensure the effective execution of marketing AI, never forget: garbage in results in garbage out. This applies to the simplest dashboard in which you monitor basic KPIs, and even more so to AI. Therefore, before starting with AI, you need to make sure your data tracking is in order and works reliably. The dataLayer and all tags must do their work well at all times in order to capture and pass on the right data.

When your tracking works perfectly, it will have a significant positive impact on the costs of your first AI project. The correct tracking setup contributes to the data quality and structure and it will eventually save your data scientists lots of valuable time. Instead of wasting time on checking, cleaning and preparing the collected data they can actually spend more time on developing and improving the AI models.

A well-working and well-documented tracking setup contributes to transparency and helps explain the AI model’s logic and predictions to stakeholders, or even to customers, when needed.

Real-time monitoring: let the AI party begin!

There are tools available in the market that help you check your data collection setup. What most of these tools don’t do, however, is monitor the most vulnerable parts of your data collection in real time.

You don’t want your artificial intelligence capabilities to depend on human checks. To boost your AI efforts, you will be far better off with a real-time monitoring tool which alerts you instantly when something is off and states in detail what the actual problem is.

Monitoring your dataLayer, tag manager and tags in real-time ensures a constant stream of reliable data at low costs:

  • AI will be working at full speed and full power without any required human interference;
  • There is no longer any need for periodical and time consuming manual (human) checks of the tagging setup;
  • When new code is conflicting with other content or objects it will be instantly detected;
  • Valuable debugging time is saved because the exact cause of an issue is pinpointed automatically, with no need to reverse engineer the setup;
  • You will have full control over the tracking on your platform and maximise the quality of the data you collect.
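One way such monitoring can work is to wrap the dataLayer's `push` method so every event is validated the instant it is pushed. The sketch below is illustrative only (the required field and the alert channel are assumptions, not a description of any vendor's product):

```typescript
// Minimal sketch of real-time dataLayer monitoring: override push() so each
// event is validated on arrival and anything malformed raises an alert.
type DataLayerEvent = Record<string, unknown>;

const alerts: string[] = [];
function alertTeam(message: string): void {
  alerts.push(message); // in practice: a webhook to Slack, PagerDuty, etc.
}

function monitoredDataLayer(required: string[] = ["event"]): DataLayerEvent[] {
  const layer: DataLayerEvent[] = [];
  const rawPush = layer.push.bind(layer);
  layer.push = (...events: DataLayerEvent[]): number => {
    for (const e of events) {
      for (const field of required) {
        if (!(field in e)) {
          alertTeam(`dataLayer push missing "${field}": ${JSON.stringify(e)}`);
        }
      }
    }
    return rawPush(...events); // still deliver the event to the layer
  };
  return layer;
}

const dataLayer = monitoredDataLayer(["event"]);
dataLayer.push({ event: "purchase", value: 49.95 }); // passes validation
dataLayer.push({ value: 10 }); // missing "event" key, triggers an alert
```

The malformed event is still recorded, but the team hears about it the moment it happens instead of weeks later in a reporting gap.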

Conclusion

Despite the many challenges and potential pitfalls, the successful implementation of marketing AI is surely feasible.

Data quality is key because any AI model needs good quality data. It is the fuel needed to deliver intelligent predictions. High-quality data will save time and resources for data preparation and maximise the chances of success with AI. Therefore, data quality assurance and correct tracking should be your first priority.

With real-time monitoring of your tracking set-up you have a permanent safeguard in place to protect your data quality. Real-time monitoring of your data collection process will pave the way for being successful with AI.


You don’t know what you got till it’s gone: Why you need to monitor your tracking and tagging setup

It is probably the biggest nightmare for online professionals, decision makers and data analysts: unreliable and incomplete data.

Within any organisation, data is the foundation for strategic decisions, solving problems and the ability to run and assess marketing campaigns. We are increasingly dependent on the quality of the collected data to stay relevant and competitive.

Moreover, running online marketing campaigns relies increasingly on trustworthy data. Algorithms used by, for instance, Google, Microsoft, and Meta work more effectively when you feed sufficient and correct data into their systems.

Therefore, it is no surprise that data collection has been in the spotlight for quite a while. As early as 2019, the “tracking rat race” between Safari and Mozilla on the one hand and marketers on the other was in full swing. Since then, many episodes have been added to the saga, most recently the launch of iOS 17 and the removal of UTM tracking parameters. With new technologies like server-side tagging and data warehousing, the landscape is getting more technical along the way.

Bad or missing data gives a distorted picture of your return on investment, algorithms can’t do their job well, and ultimately you waste part of your advertising budget. Despite all the attention paid to data collection, the sad reality in most companies is that stakeholders don’t even know if and when tags are firing. Most websites have extensive monitoring in place for their infrastructure, but the functioning of tag containers and tracking tags, despite its significant impact, is almost always overlooked—and correct tagging and tracking is crucial for collecting consistent and accurate data.

Tracking threats

  1. Tag managers allow marketers to inject code into your website without the help of a developer. At the same time, those tag managers have no safeguard in place to stop new code from conflicting with other content or objects. You will not be alerted when tags are not loading, malfunctioning, causing errors or conflicts, or slowing down your pages.
  2. Frequent updates to the site, as well as to third-party technologies, can cause errors that result in a collection gap of several days up to a few weeks. Issues don’t get noticed until it’s too late and irreparable damage to the data quality has already been done.
  3. Dependencies and risks are getting bigger over time due to a growing number of technology partners and tags, each forming a potential point of failure.
  4. A lack of process, knowledge, and communication between teams, departments, and external agencies poses a risk. This risk is even bigger for companies with multiple domains and larger teams.
  5. Manual checks of the tagging setup are time-consuming and inconsistent: the results are influenced by, for example, the time of day, location, device, and browser. It’s impossible to test all edge cases.
  6. You can’t influence everything when it comes to tracking and tagging; for example, an endpoint could be offline or an API could be malfunctioning without you knowing.

Everyone has the same struggle

Sometimes you find out by coincidence that you have been missing conversions for a specific period, and you can only guess how many you missed based on the numbers of the previous period. During that time, you have been annoying all those buying customers with irrelevant retargeting ads. That is money wasted on annoying your customers.

You are not alone in this struggle. Most websites are not protected against the tracking threats described above. An average website regularly encounters issues, with error rates of 5% up to 25%. Basically, you and most of your competitors currently have no oversight of, or insight into, how tags are impacting data collection and the user experience of your visitors.

In the land of the blind, one-eyed data collection currently still is king

You never get a second chance to capture the relevant data of your website’s visitors. It is therefore fundamental that your tagging and tracking setup guarantees a steady and uninterrupted process of data collection. You need safeguards in place that alert you in real time when things go wrong, so you can act accordingly when something is happening with your tagging, whatever the cause may be.

Currently the one-eyed man can still be king. But what if you could be the two-eyed emperor? You can have full control over the tracking on your platform, maximise the quality of the data you collect, and make sure your marketing tagging no longer negatively affects the user experience.

Besides making sure your dataset is up to par, this saves you a lot of time: no more pinpointing the exact cause of an issue and reverse engineering the setup by hand.

This is possible with real-time monitoring of your tracking and tagging. Automated monitoring ensures a constant stream of reliable data at low cost. Reliable data tracking starts with real-time monitoring, and it is the foundation on which any data-driven strategy should be built.

About the authors:
Harm Linssen and Suze Löbker are co-founders of Code Cube
(one of the sponsors of the DDMA Digital Analytics Summit on 10 October 2024).

Harnessing First-Party Data: A Balance of Knowledge and Trust

Data has always been an invaluable asset, and this won’t change in the foreseeable future. On the contrary, due to the many developments in the online world, data is becoming even more indispensable. Today, collecting first-party data (the user and/or customer data you collect yourself) is becoming an industry standard. But why is first-party data so important, and how can it be applied? Those questions are answered in this article.

About the author: Steven van Eck is Web Analytics Specialist at SDIM (one of the sponsors of the DDMA Digital Analytics Summit on 10 October 2024). On this day we’ll indulge ourselves in all things digital analytics. Will we see you there? Get your tickets at: shop.digitalanalyticssummit.nl.

Cookieless Era

The “Cookieless Era” is a topic that has held the data community in its grip for the past few years. While the industry is moving towards this change, a completely “cookieless” environment remains somewhat elusive. Third-party cookies, however, have felt the impact, with numerous browsers actively shielding against these types of cookies.

The disappearance of third-party cookies has had big consequences for online businesses and marketers. Collecting valuable data is becoming increasingly difficult, and targeting relevant audiences is not getting any easier either. Using third-party data from platforms such as Google and Facebook for lead generation or targeting is no longer the obvious choice. You now have to come up with other, creative ways to collect and apply data, especially when it comes to determining and achieving your business objectives. Collecting first-party data is now a crucial step in this process.

Ownership of your data

By collecting and storing first-party data, you truly take matters into your own hands. You become the owner of the data you collect and are no longer dependent on third parties like Google or Facebook.

  • Independence from third parties: If your data is currently stored with third-party organisations, you will always depend on those parties for the processing and accessibility of that data.
  • No more splintered data: Another great benefit of ‘owning’ your data is that you can store it in one location that is accessible to you at all times. Moreover, this can save you quite some time by eliminating the need to transfer and translate the data from other platforms into one report. One of the most ideal solutions to this is data warehousing.
  • Retaining raw data: By hosting your own data storage solution you can determine how you save and process this data. You can keep it in its rawest form without the need for sampling to lighten the load. Additionally, any miscalculations or -communications about the results can be prevented, as the source material is always within arm’s reach.
  • Determine the expiration date: Data longevity often varies across platforms and tools. Through data warehousing, you can determine the lifespan of your data – whether it’s two months or two years, it is all up to you.
  • Ensuring Privacy and Security: By setting your own parameters for data processing and storage, you assume the responsibility of respecting user privacy, complying with the GDPR (and the Dutch AVG), and the prevention of data breaches.

The importance of strategy

Collecting first-party data is not a goal in itself. You have to prepare beforehand and take a number of things into account. To help you prepare, ask yourself the following questions and see if you can provide a sufficient answer:

  • What first-party data do I want to collect?
  • How will I collect this data?
  • Where will I store it?
  • How will I ensure GDPR-/AVG-compliance?
  • What is the purpose of this data?

In other words, don’t just start collecting first-party data, but create an initial ‘data strategy’ that falls in line with not only the values of your organisation, but also its objectives. You want to prevent missing out on important data or creating security risks due to the lack of a proper plan – handle with care! Creating a first-party strategy as a foundation for future online endeavours will most certainly help you in the long run, even if it’s just to keep things organised and prevent any mishaps from happening.

The meteoric rise of Artificial Intelligence

Another topic that cannot be excluded from today’s conversation is Artificial Intelligence (AI). In the past year, the developments around AI have skyrocketed, to such an extent that it is virtually impossible to stay up to date with every current and future innovation.

Without data, AI wouldn’t be where it is today, let alone exist. Behind every AI solution is an algorithm that has been trained by processing often massive amounts of data. One of the most well-known examples is of course ChatGPT, which uses data from the internet to train and develop its algorithm. An AI solution doesn’t always need public datasets in order to learn: if you’re trying to create an AI that provides you with relevant insights, the first-party data you have collected will be its most important ingredient, as this data is unique to your situation.

There are a plethora of conceivable applications when it comes to Artificial Intelligence and using your first-party data sets as the primary source – for example:

  • Analysing and segmenting your customer base.
  • Providing relevant up- and cross-sell recommendations.
  • Predicting customer lifetime value and churn.
  • Creating personalised experiences for previous customers/visitors.
  • Identifying fraudulent behaviour by tracking suspicious activity in your own data.

These are a few examples of the many possibilities in which AI and first-party data can be leveraged as quite the dynamic duo. The coming years will definitely provide us with even more exciting new and unique applications of AI.

From raw data to marketing

Through first-party data, reaching your target audience has become even more interesting, as different types of collected data enable different applications. On your own website, first-party data can be leveraged to create personalised experiences, as mentioned before. Users who land on pages adapted to their personal interests are more likely to trigger a conversion action.

Another great way to leverage your first-party data is by using marketing automation. Through marketing automation you can target specific audiences with relevant messages. For example, an automated follow-up email that is sent three days after a visitor has downloaded a whitepaper. While email marketing remains a popular choice for marketing automation, there are other alternatives. Another quite underrated method is text messaging — not the WhatsApp variety, but traditional SMS. Others include (native) site pop-ups and sending notifications through apps, social media or other channels.

First-party data also enables targeting audiences through advertising platforms such as Google and Facebook. These channels allow you to upload the collected data, granting you immediate access to your desired target audiences within that specific platform. Moreover, you will also be able to reach audiences that share traits with your primary one. It allows for a much more creative yet relevant approach to your marketing activities, which could very likely result in a higher conversion rate.

Independence, innovation and trust

In the digital world, data plays an increasingly prominent role and the value of first-party data will continue to grow. Collecting and governing data on your own visitors and customers not only offers more independence and control, but also leads the way to personalised experiences and innovative applications such as Artificial Intelligence. Proper data collection and effectively leveraging first-party data will be important cornerstones for the future of data.

In conclusion, the power of data is undeniable, that much has been proven. However, let’s not forget that without consumers, we would not have any data to collect. For that reason, be mindful of the consumer and be transparent about the processing and application of their collected data. If you manage to earn the trust of your customers, they are more likely to be willing to share their data, and of course, this relationship will always involve a balance of give and take.

The importance of data quality for a data-centric way of working

Based on a recent 2023 Statista study among loyalty customers worldwide, the top purchase drivers are: a plentiful range of products (81%), product availability (80%), data privacy policy (79%) and good customer service (77%). Evidently, having a precise understanding of the customer experience holds great significance. To effectively monitor your customers’ experiences and to make well-founded decisions, you need reliable data for information and insights. Adopting a data-centric way of working is the means to accomplish this.

Ultimately, data provides you and your teams with the information and customer insights needed to improve the customer experience. Perceiving your actual customers’ behaviour through data, and leveraging this knowledge to make well-informed business decisions that refine their customer journey, requires 100% reliance on that data. Adopting a data-centric approach in which everyone places their trust in customer data is essential. This article by Jesse Terstappen (OrangeValley, sponsor of this year’s Summit) delves into several prevalent risks associated with upholding data quality, the prerequisites for maintaining it, such as effectively managing data privacy and compliance, and the criteria for establishing a data-centric organisation.

The most common data quality risks

  1. Accidentally including personally identifiable information (PII), such as email addresses of your paying customers, in URL parameters. This violates GDPR legislation and can result in fines and reputational damage. It is not very common, but carries an extremely high risk; we spotted this once last year.
  2. Important decisions cannot be fully supported with actionable information due to a loss of analytics measurements after a development update. A gap in the data (a week or even longer) is a common occurrence at many organisations. As you can imagine, this makes it harder to rely on long-term analyses, since you will not be able to draw conclusions from year-on-year comparisons of the same time period. We see these challenges appear on a quarterly basis.
  3. Overload of Data and Tracking Methods: With the abundance of data available today, it’s easy to become overwhelmed by the sheer volume of information. It’s important to prioritize collecting data that aligns with your organisation’s goals and KPIs. Additionally, having a streamlined approach to tracking methods can help ensure consistency and accuracy.
  4. Attribution Models and Discrepancies: Different advertising and social media platforms often have varying attribution models, leading to discrepancies in reported numbers. Establishing a standardized attribution model or understanding the differences among platforms can help mitigate confusion. Regularly auditing and cross-referencing data from different sources can also help identify inconsistencies.
  5. Lack of Consistency in Reporting & Defining of KPIs: Consistency in reporting is crucial for building trust in data-driven decisions. Having standardised reporting templates, guidelines, and protocols can help ensure that data is interpreted consistently across departments. Ideally, key performance indicators (KPIs) and tracking methods should be defined consistently across the organisation. This promotes a unified understanding of goals and ensures that everyone is working toward the same objectives. However, it’s important to allow some flexibility to adapt to specific departmental needs while maintaining alignment with the overall organizational strategy.
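The first risk above, PII leaking into URL parameters, lends itself well to automated checking. Below is a minimal illustrative sketch, not a complete PII scanner: the parameter name and the simple email regex are assumptions for the example, and real monitors use much broader rule sets.

```python
import re
from urllib.parse import urlsplit, parse_qsl

# Simple email pattern; production PII scanners use broader rule sets.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def find_pii_params(url):
    """Return (key, value) pairs whose value looks like an email address."""
    query = urlsplit(url).query
    return [(k, v) for k, v in parse_qsl(query) if EMAIL_RE.fullmatch(v)]

# Hypothetical thank-you page URL with an email leaked into the query string.
hits = find_pii_params("https://example.com/thanks?order=123&email=jane.doe%40mail.com")
```

Running such a check over collected page URLs (for example, in a daily job) surfaces accidental PII before it accumulates in your analytics data.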

Mastering data privacy and compliance

Proper configuration leads to privacy and compliance. Incorrect privacy settings can cause the unintended collection and storage of sensitive data. Mastering data privacy and compliance means ensuring that privacy settings are configured and maintained correctly to comply with applicable legislation, such as the GDPR. You of course want to avoid fines and reputational damage. A GDPR monitor, including a dashboard with real-time alerts, can assist by automatically checking for the presence of personally identifiable information (PII) in the data.

Who has access to what personally identifiable information (PII)?

A crucial aspect of compliance involves using user permissions accurately and appropriately managing access to GA4 data. Mishandling user permissions and access controls may lead to unauthorized access to GA4 data. It’s imperative to confine access strictly to authorized users only to prevent sensitive data from falling into the wrong hands. This helps to prevent data breaches and avoid potential legal consequences.

The importance of continuously collecting, tracking and storing valuable data

A proper analytics implementation is of course required to collect and store data that can become information and, finally, insights. On the one hand, we see that a lot of data is rendered unusable due to the incorrect way it was gathered. Since you only have one chance to collect data, the collection must be set up correctly from the start. Maintenance is another necessity: we often see that historical information is no longer available due to an incorrect retention-period setting. Trends can then no longer be spotted and valuable analysis is no longer possible.

A 6-stage data model to handle your data
You can utilise the 6-stage data model below to ensure proper data handling, thereby acquiring valuable insights that serve as the foundation for your actions. The initial phases of this model encompass establishing and maintaining data collection, as well as managing data storage. These initial steps, involving data gathering and storage, are of utmost importance. The subsequent stages—’Transformation,’ ‘Visualisation,’ ‘Analysis,’ and even ‘Activation’ of data—rely heavily on these foundational data aspects: the proper setup and maintenance of data collection, as well as accurate data storage with appropriate retention periods.

How to maintain data quality?

Since analytics platform vendors constantly change their existing features on their platform or launch new ones, organisations often struggle with an outdated analytics set-up. This renders valuable data useless, leading to a squandering of data investments. To counteract this issue, an automated data quality monitor diligently oversees your analytics configuration, providing real-time notifications to the team when adjustments are necessary. This mechanism guarantees the upholding of stringent data quality standards at minimal expenses.

How can you minimize data loss?

With the help of a data quality monitor, you can automatically compare trends in today’s data with those from the previous day. Comparing your day-to-day data gives you critical alerts, enabling you to identify instances where a conversion (formerly a goal completion) has been altered due to changes on your website. Comparing day-to-day traffic data can, for example, flag tagging issues, which can then be fixed directly. It also ensures the seamless flow of qualitative data into your data storage location. Subsequently, the process of transforming, visualising and analysing data can begin.
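The core of such a day-over-day comparison fits in a few lines. This is a deliberately minimal sketch, not a production monitor: the metric names and the 50% drop threshold are assumptions for the example.

```python
def traffic_alerts(today, yesterday, threshold=0.5):
    """Flag metrics that dropped by more than `threshold` (as a fraction) day over day."""
    alerts = []
    for metric, current in today.items():
        previous = yesterday.get(metric)
        # Only compare metrics that existed yesterday with a non-zero count.
        if previous and (previous - current) / previous > threshold:
            alerts.append(f"{metric}: {previous} -> {current}")
    return alerts

# A sudden collapse in a conversion event suggests broken tagging, not real behaviour.
alerts = traffic_alerts(
    today={"sessions": 980, "purchase": 3},
    yesterday={"sessions": 1010, "purchase": 41},
)
```

A real monitor would also account for weekday seasonality (comparing against the same weekday last week, for instance), but the alerting principle is the same.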

How to do a reliable data analysis: the famous ‘360 customer view’

To perform a reliable data analysis, you first need to make sure events are set up correctly and filters are configured accurately, so that reports are reliable. Misconfiguration can result in inaccurate data and analysis when, for example, certain events or traffic are excluded. This can lead to wrong conclusions, poor decision-making and missed opportunities for improvement.

Moving forward, additional focal areas of significance encompass understanding the distinctions between universal analytics and GA4, navigating the intricacies of varying reported conversion figures, and constructing attribution models.

The difference between Universal Analytics and GA4 output

We have all seen the differences between the output of Universal Analytics and GA4. These differences decrease trust in the data among our colleagues; the very people we want to convince with our analytics insights. There is also a difference between the data shown in the GA4 interface and the raw data. Although Google might say that they’re showing you all the data, GA4 is not showing you 100%. The reason behind this is a focus on speed: Google wants to compete with other analytics platforms on loading time in the interface. One technique they employ to achieve this is session estimation, which is based on a smaller subset of the data. This also explains part of the difference between UA and GA4 output.

Why do reported conversion numbers on social platforms differ from those in your analytics platform?

You might have noticed the differences in how conversions are attributed to paid advertising or social channels. For instance, why does TikTok report a higher count of conversions than your analytics platform? Meta is also a name that often comes up within our agency when differences in conversion reports are discussed. These discrepancies stem from the underlying business model of the advertising and social platforms: they profit from a higher number of conversions. How conversions are assigned to a platform, and why the numbers differ, comes down to attribution. Different methods are used to assign credit to the various marketing channels or touchpoints along the conversion path. GA4 now uses three different attribution models:

  1. Standard channel group for new users: First Click
  2. Standard channel group for sessions: Last Click
  3. Standard channel group for conversions: Data Driven

Build your own attribution model

You can put your organisation in control of the challenges posed by differences between attribution models by creating and managing your own attribution model. If you want to make full use of all available GA4 data, BigQuery can be a viable option. Through the BigQuery integration, the raw data can be used. With the help of SQL, your team can reproduce the reporting options available in GA4 and even customise them. This makes it possible to define and use rule-based marketing attribution models, using logic that you own and can change. This puts you in the driver’s seat of attribution!
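The kind of rule-based logic you would express in SQL over the BigQuery export can be illustrated with a small sketch. Below is a hypothetical “last non-direct click” rule written in Python; the channel names are assumptions for the example, and a real model would run over your exported event data.

```python
def last_non_direct_click(touchpoints):
    """Rule-based attribution: credit the last non-direct touchpoint
    on the conversion path, falling back to 'direct' if nothing else appears."""
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"

# A hypothetical conversion path, ordered from first to last touchpoint.
path = ["organic", "email", "direct"]
credited = last_non_direct_click(path)  # "email"
```

Because the rule is yours, you can change it (for example, to position-based or time-decay weighting) without depending on how any platform chooses to assign credit.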

What are the conditions required to transform to a data centric organization?

While data-driven digital marketing focuses on using data as a tool, data-centric digital marketing goes a step further by viewing data itself as a valuable asset. It means seeing data as an essential business asset that is central to making decisions and developing marketing strategies. Collecting, storing and managing data is key to gaining valuable insights into customer behavior and trends. A data-centric approach is essential for organisations that want to grow and compete in a digital environment. By seeing data as a valuable asset, companies can differentiate themselves from their competitors and gain valuable insights that lead to effective marketing strategies and a better customer experience.

The four fundamental aspects for a data-centric organization

  1. Maintenance: Data quality is set-up properly and maintained constantly;
  2. Knowledge: Uniform understanding among all company stakeholders regarding tracked elements and the significance of various metrics;
  3. Application: Every employee knows how to use data whenever relevant and possible;
  4. Trust: Fostering a sense of confidence and reliance on data throughout the organization.

One example of how a client of ours helps all stakeholders across the organisation understand both the tracked elements and the various metrics is a ‘KPI catalog’. This catalog encompasses all triggers and definitions of measures, presented in comprehensible language for every stakeholder within the organisation.

Conclusion

Research underscores the paramount importance of data quality in a data-centric approach to business. Understanding customer preferences, driving decisions, and enhancing customer experiences depend on accurate and reliable data. Aspects such as data privacy, consistency, and proper setup play vital roles in maintaining data quality. Organisations must establish strong data fundamentals, automated quality monitoring, and reliable analytics implementation to navigate the challenges and unlock the benefits of a data-centric approach. Trust, knowledge, application, and maintenance are the cornerstones of such a transformation, enabling effective decision-making and superior customer engagement. Reliable data and employee data trust are fundamental for building thriving data-centric organisations in the future.

Author: Jesse Terstappen, Data Analyst, OrangeValley

The Shift in Online Data Collection

Our view on data and its governance has shifted in recent years, and it differs from what we were used to, with privacy as the main driver of this shift. The time of placing cookies and saving (personal) data without clear communication and consent is over. The user and their privacy have become the top priority, and this has to be respected. The many changes currently taking place in the digital landscape make this apparent.

This article is written by Steven van Eck, Data & Analytics Manager at SDIM Online Marketing. SDIM is sponsor of the DDMA Digital Analytics Summit, which recently (October 13, 2022) took place in B. Amsterdam. 

Besides new legislation and guidelines on privacy, this shift is also driving technical changes. One of the biggest changes in the digital landscape is the impending disappearance of third-party cookies and the limitations placed on first-party cookies. The outcome of these developments and their consequences has been dubbed the ‘Cookieless Era’ among digital marketers and analysts. This Cookieless Era has forced us to become critical not only of the way we have been collecting data, but also of the kind of data that we collect.

The aforementioned developments have led to a variety of changes in the market and have foreshadowed a multitude of changes yet to come. Our ‘need’ to collect as much (relevant) data as possible won’t disappear; it’s just the rules that are now different. Logically, changing the rules also requires the players to adapt their behaviour while simultaneously minimising the risk of ‘unwelcome’ consequences.

It is crucial that we remain mindful of this changing landscape by adopting critical mindsets. This will be the absolute key in order to stay ahead. Thus, this article will discuss a number of important topics that overlap with these developments.

Server Side Tracking

The current method for collecting data involves firing a script, placed on the website, from the browser of a user or visitor (Client Side). Despite being an industry standard, it has numerous drawbacks. One is its sensitivity to adblockers: a user who has installed an adblocker in their browser can easily prevent their data from being shared with parties such as Google or Facebook. This limits the amount of data an online advertiser can collect. Another big constraint of Client Side tracking is the lack of control over your data and what information is being sent to third parties. This makes sense: you are placing a third-party script on a website, and how often do we fully comprehend its functionality? Do we always know exactly what data is being collected?

A popular alternative that has been (and currently is) making waves among both smaller and larger parties is Server Side Tracking. With this technique, a script is no longer fired from the user’s browser, but from an external server governed by the website’s owner. This allows for both faster site speed and the circumvention of adblockers. Furthermore, because the data is collected through a server you own, you gain full control over the tags you fire and the data you collect. You even prevent third parties from collecting data on your website without your permission, which could occur when placing third-party scripts Client Side; such data might include personal user data shared while filling in a digital contact form, for example. Server Side Tracking prevents this issue. In other words: you are in full control of your data and its collection.
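The extra control Server Side Tracking gives you can be illustrated with a minimal sketch: before your own server forwards an event to a third party, it keeps only the fields you have explicitly chosen to share. The field names below are assumptions for the example, not any platform’s actual schema.

```python
# Fields you have explicitly decided may leave your server.
ALLOWED_FIELDS = {"event_name", "page_location", "value"}

def filter_event(raw_event):
    """Keep only allowlisted fields before forwarding an event to a third party."""
    return {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}

clean = filter_event({
    "event_name": "form_submit",
    "page_location": "/contact",
    "email": "jane@example.com",  # PII typed into the form; never forwarded
})
```

The design point is that the allowlist lives on your server, under your control, rather than inside a third-party script whose behaviour you cannot inspect.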

From Third to First-Party Data

With the disappearance of third-party data, the need to collect first-party data naturally grows as it remains an invaluable asset to any kind of business. There currently are a number of browsers that already block third-party cookies by default, with Safari and Firefox being prime examples. Nevertheless, the browser with the biggest market share, Google Chrome, still allows the use of third-party cookies and has said it will continue to do so until 2024. Once they block third-party cookies by default, the ‘Cookieless Era’ will officially begin.

As mentioned before, our focus will shift from third-party to first-party data. Creative solutions for tracking and data governance will become incredibly important in order to maintain a steady flow of data collection. We have already seen quite a few different solutions, mainly in the shape of collecting email addresses. These include:

  • Log-in screen to read content
  • Newsletter subscriptions
  • Downloads of white papers or brochures
  • Quizzes that use email addresses to share results
  • Discounts in exchange for email addresses

Another approach is to collect data first-party and then use it for different purposes. These include:

  • Marketing automation: send personally customized newsletters to potential leads based on their position in the customer journey.
  • Website personalization: show personalized content depending on the profile of the visitor in question.
  • Advertising channels: email addresses can also be uploaded to platforms such as Google and Facebook for the purpose of advertising. This data could then be used for remarketing purposes or to reach similar user profiles.

An important thing to keep in mind is that with both collecting and using first-party data you need to be fully transparent towards users about your conduct. They have to be aware that their data is collected and used for other purposes, so that they can decide whether they agree or disagree with this.

Developments in Web Analytics

All current and upcoming developments also influence web analytics. Web analytics platforms mostly use (first-party) cookies to recognize visitors. Shortening the cookie lifespan will have a significant impact on the results we see in tools such as Google Analytics. If visitors cannot, or for a short period of time, be recognized, it will be increasingly difficult to map out a complete customer journey. Moreover, attributing traffic sources and channels won’t be possible in the same way that we are (or were) used to.

To fill these gaps in data, new techniques are used to maintain a similar way of collecting web statistics. For example, applications of machine learning and data modelling are becoming increasingly common. A prime example is Google’s new analytics platform, Google Analytics 4 (GA4). This new generation of Google Analytics has integrated these techniques to combat issues such as data loss, which was necessary, as Google Analytics remains a market standard despite the numerous challenges it has faced this year alone.

With its machine learning algorithms, GA4 contributes to developing and recognizing important insights and trends. For example, when a decline in visitors from a specific campaign occurs, GA4 could potentially present this to you as one of its Insights. Moreover, it has predictive metrics that can calculate the expected revenue your visitors generate. This data can subsequently be used in your Google Ads campaigns to target users that potentially generate a higher revenue.

Machine learning also contributes to data modelling. For example, when users deny consent for analytics cookies, that data will be unavailable to you, resulting in so-called ‘data gaps’. To close these gaps, data modelling is applied using data from comparable visitors who did consent to analytics cookies. This way, you respect the agency of your visitors while avoiding a loss of data.
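The intuition behind closing consent-driven data gaps can be shown with a deliberately simplified sketch: scale up what you observe among consenting visitors by the consent rate. Real modelling, such as GA4’s, is far more sophisticated; this only illustrates the basic idea, under the assumption that consenting and non-consenting visitors behave similarly.

```python
def estimate_total_conversions(observed, consent_rate):
    """Scale conversions observed among consenting visitors up to the full audience.

    Assumes consenting and non-consenting visitors convert at similar rates,
    which is the simplifying assumption behind this sketch.
    """
    return round(observed / consent_rate)

# 120 conversions measured, but only 80% of visitors accepted analytics cookies.
estimated = estimate_total_conversions(observed=120, consent_rate=0.8)  # 150
```

In practice the behavioural assumption rarely holds exactly, which is why production systems model the gap per channel, device and page rather than with one global rate.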

The possibilities regarding machine learning are still relatively limited, as these functionalities are new. We expect further developments in these functionalities to be introduced in the coming years.

Big Ad-tech developments

Big ad-tech parties owe their very existence to data and, in today’s environment, are primarily concerned with new developments in data collection. This has resulted in many new functionalities that support collecting more relevant (conversion) data. Some examples of these new functionalities are:

  • Facebook Conversion API: a technique developed by Facebook to collect and send data to Facebook. Data is sent from server to server without relying on client-side tracking, which is becoming more difficult due to the disappearance of third-party cookies and the limitations applied to first-party cookies. With this server-side solution you can circumvent these obstacles. Moreover, you have more control over the data you send from server to server.
  • Google Enhanced Conversions: with Enhanced Conversions from Google, you include additional user data when registering conversions, for example a user’s email address or home address. The data is sent to Google in hashed form to ensure user privacy. This additional data assists in matching a visitor to an existing Google account that previously interacted with an ad, allowing extra conversions to be attributed to users who use multiple devices.
  • Google Consent Mode: this new functionality from Google is used to collect information regarding the cookie-consent provided by the user, respecting their choice whether they want cookies to be placed or not. For example, if a user were to refuse giving consent, only a limited amount of data will be collected from their visit – data that will not require cookies. With the use of Google Consent Mode you respect the user’s agency while simultaneously allowing you to collect ‘cookieless’ data.
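The hashing mentioned under Enhanced Conversions generally follows a simple pattern: normalise the value, then hash it with SHA-256. Below is a minimal sketch of that pattern; the exact normalisation rules a given platform expects may differ, so treat this as an illustration rather than a spec-accurate implementation.

```python
import hashlib

def hash_email(email):
    """Normalise an email address (trim whitespace, lowercase) and hash it
    with SHA-256, the general pattern used when sharing user data in hashed form."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

digest = hash_email("  Jane.Doe@Example.com ")
```

Normalisation before hashing matters: without it, trivially different spellings of the same address would produce different digests and never match.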

We expect that the development of new data- and advertising-related technologies will be ongoing in the coming years. Furthermore, they will surely play a big role in the online landscape.

Respect the User

One thing is certain: the past months have been anything but boring, and we have many more exciting things to explore in the (near) future. Many new data- and privacy-related developments await us, as data will remain a vital part of our daily activities, even if we have to abide by the rules. These rules are set for a reason: however important data is, the security and privacy of our visitors need to be respected. Always be clear and transparent about what information is gathered from users and how this data may be applied for other purposes.

Don’t focus too much on what you won’t be able to track, but focus on the new possibilities, advancements and technologies that allow us to grow even more. This is a unique opportunity that is presented to us, one with many challenges. But, instead of running away, we should face them head on, as there are many exciting things waiting for us in our data-driven world.

About the author: Steven van Eck is Data & Analytics Manager at SDIM Online Marketing.