Challenging the Feasibility of Digital Agency: A Bygone Era of Online Privacy or a Stepping Stone to a New Understanding?

Artificial Intelligence, more commonly known as AI: it would be virtually impossible not to have heard those two words, or letters, used together at some point. It is a concept long explored in fiction, though almost always to a ‘doomsday’ extent and rarely in a more everyday, realistic way. For a long time AI did not seem accessible to the masses, yet that perception has changed drastically over the past few years, as its adoption in all its forms has been nothing short of unbelievable. Seemingly overnight, it became part of everyday life, both privately and professionally.

It might be safe to say that anyone who works in digital marketing, like I do, uses it on a daily basis. Whether it is helping to write code, creating visual content, or even drafting articles like this one, AI is an incredibly useful tool; no one can deny that. Knowing you always have someone (or something) to bounce ideas off is reassuring, even given the occasionally hallucinating nature of such tools – which, of course, is a great way to stay sharp and a reminder of the importance of a human brain. At the same time, it is easy to drift from AI being a useful extension of your work to letting it take over your work entirely without giving it much thought. And now that AI is being integrated into almost anything digital, often without giving us much of a choice, a number of questions arise: how aware are we of the inner workings of AI solutions? Who actually uses the data they collect, and for what? How is that data stored, and is it as safe as promised?

Let’s face it: the (business) models are fundamentally vague, and perhaps deliberately so, but their users and consumers are not completely off the hook either. Should we even use these tools if we do not fully comprehend what we are ‘signing up for’? Or are the benefits simply too good to pass up? We claim to value our privacy highly, yet our behaviour suggests we readily trade it for convenience. So, are online privacy and security still attainable goals in an era of pervasive AI, or have we passed the point of no return?

AI race or craze?

Let us take a step back and look at the current situation. In the past three years alone we have written about developments such as the cookieless era, server-side tracking, the importance of data governance, the shift from third-party to first-party data, and of course the rise of AI. Viewed from this angle, setting aside personal perspectives and feelings, the rate of technological development in this area is quite astounding, especially over such a short timespan. Where AI once felt out of reach for the general population, Large Language Models (LLMs) alone have completely flipped that narrative, to the point where they have become part of many people’s daily routine. These tools have progressed impressively, but that progression also raises plenty of eyebrows. LLMs such as the giants ChatGPT (OpenAI) and Gemini (Google), but also smaller or more specialized alternatives like Claude (Anthropic), DeepSeek and Grok (xAI), have forced their way into the mainstream, where they are used not only by professionals but by everyone. From simple dinner recipe ideas to actual lifestyle tips, which sometimes leads to disastrous consequences, people seem to be adopting this technology without giving it much thought.

Something that has bugged me personally in this AI craze is how casually people use AI tools without putting any real thought into it. Photos are uploaded to create funny, parodic (and sometimes scarily accurate) pictures, random user data is entered into LLMs such as Gemini and ChatGPT, confidential work documents are pasted in for a quick summary, and highly sensitive medical symptoms are shared with AI chatbots like it is nothing. In other words, we entrust our most private data to platforms we know almost nothing about, neither their inner workings nor necessarily the people behind them. Moreover, we are almost forced to use AI, whether we like it or not. Take your smartphone: I think we all know by now that it is “always listening”. Targeted advertisements seem to pop up almost immediately after you discuss a new brand of shoes out loud with it nearby, and Google can follow your travels in scary detail, to name just two examples. And now, due to the relentless push for AI, our everyday apps are being invaded by it without even giving us a choice. Is an AI assistant for WhatsApp really necessary? Of course there is no way to disable it, and although Meta swears it does not circulate the data from your personal accounts, can we ever be fully certain? Regardless of what happens with that data, Meta’s and Google’s track records of collecting fines for violating user privacy regulations in the US and EU tell us everything we need to know to be, in my personal opinion, rightfully sceptical: data is valuable, and personal data is even more valuable.

The rise of digital awareness

Besides enthusiasts who only see the positives of this new development, there are plenty of sceptics who are not so keen on the constant stream of new technological integrations into our everyday lives. It is an interesting trend: awareness of our digital footprint, and of the (sub)conscious decisions we take in regard to it, is clearly increasing. This raises the question of how much control we, as the general public, really have over our data, but also how businesses can ensure transparency and privacy for their consumers. Usercentrics has released an insightful report on this, based on a survey of 10,000 frequent internet users across Europe and the US, which sparked my curiosity. The goal of the study was to better understand the current state of data privacy and digital trust. More than half of the respondents signalled trust issues in regard to the use of AI; the report’s most crucial findings are summarized below:

  1. Consumers nowadays are more aware of their personal data and its value. When transparency is lacking, trust will inevitably decline as the idea of control fades away. Around 60% feel like their data has become an object or product and are uncomfortable with their data being used as ‘fuel’ to train AI, which does not even fundamentally benefit them in most scenarios.
  2. Awareness of cookie banners has increased quite a bit, to the extent where consumers now actually read the banners. Additionally, clicking “Accept all” has become less of the obvious reflex it was a couple of years ago as nowadays it reflects a more permanent decision (almost 50% of respondents!).
  3. Consumers are losing trust in brands as long as they are not fully transparent in what happens to their data. Transparency on data collection and usage has now become the number one trust factor for over 40% of the participants in this study.
  4. Lastly, and maybe most importantly, the report shows that consumers actually care about their privacy, but feel unsure of what it means and how it works. A staggering 77% of the respondents do not understand how their data is collected and used by businesses.

While some of these conclusions are worrying, especially knowing the issue is a lack of understanding, it also shows a clear solution to the issue, namely transparency. To most users, as proven by this research, AI still feels like a “black box” which is an incredibly poignant comparison as this is exactly the analogy we use for the consumer’s thought process in (digital) marketing and consumer psychology (Usercentrics, 2025).

This becomes even more poignant when AI is used to break open this “black box”. Look at analytics tooling like the various Google Analytics editions: it used to provide insight into consumer journeys that we could then analyse in more depth if need be, but the latest iteration, Google Analytics 4, can actually predict and pre-emptively analyse a consumer’s on-site behaviour based on the collected data fed to its underlying algorithm. This is obviously not the only platform that does this, and we see it happen across all types of media. These datasets are also used to create top-notch hyperpersonalized marketing. Where a handful of personas used to determine a marketing strategy, targeted marketing nowadays needs to be as specific as possible, giving rise to the “they’re always listening” sentiment. Some examples involve things we might not even be aware of, like Netflix’s personalized recommendations, Spotify’s annual Wrapped, and Starbucks’ app. None of this would be possible without collecting and using user data to train the AI models that enable these hyperpersonalized experiences. Yet despite the feeling of control slowly slipping away, it is not impossible to reclaim agency to an extent, even in a landscape where online privacy seems unattainable.

Reclaiming a sense of agency

The bottom line is that it is a two-way street: consumers should be aware of what they can do to safeguard their own privacy, but it is also the responsibility of businesses to be transparent and educate their consumers. So how can we reclaim this agency when uncertainty and a lack of understanding are prevalent? From a business perspective, as mentioned earlier, it should be the responsibility of a business to provide the information consumers need to make well-informed decisions. This does not have to take the form of webinars or whitepapers, albeit an interesting idea, but rather a comprehensive consent banner, clear and concise cookie and privacy notices, and above all honesty and transparency about what data you collect, why you collect it and how it is used – especially when it comes to AI. More importantly, give people a choice. As mentioned in the first section, integrating an AI feature into an app, like WhatsApp’s Meta AI, is fine, but allow people who are uncomfortable with it to disable it, regardless of whether it actually collects user data, especially if their only reassurance is a corporation’s ‘word’.

From a consumer perspective, it can be difficult to fully immerse yourself in this sphere, given its sheer size and complexity; oftentimes businesses and other organisations deliberately experiment with the grey areas and boundaries of what is allowed. Nonetheless, simple habits such as reading the consent banner and the privacy and cookie notices when entering a website are a great first step towards understanding what data is collected. Additionally, try to stay up to date with your apps’ release and update cycles so you know when your favourite social or shopping apps start collecting more or additional data. It is common practice to notify users of a big update, mainly because laws and regulations often require it, so that is a great cue to read the patch notes. And lastly, perhaps the most dramatic step of all: if you are not comfortable with your data being collected or used for AI training or marketing purposes, consider no longer using that specific app, website or service. This is the most surefire way to prevent your personal data from being used.

Conclusion

In short, the solution to the problem is not as difficult as we might think. I personally do think we need to face reality and accept that true online privacy has been a myth for quite some time now, largely by our own doing, but we can still make active choices, from both a consumer and a business perspective. Feasibility is a spectrum, after all, and may differ from person to person. Regardless of personal sentiments and emotions, the need to understand our role in this landscape cannot be underestimated if we want to work towards a fair and balanced digital future, and that understanding starts with us, the individual.

Author
Dave Swart
Data Specialist & Lead Strategy at SDIM

SDIM is a sponsor of the DDMA Digital Analytics Summit 2025 on October 9th. Get your tickets here.

Turning data into action: why dashboards aren’t enough

Organizations today pour huge investments into data. They build data warehouses and BI dashboards, calculate customer lifetime value (CLV), even develop MROI (Marketing ROI) models. Yet strangely, many never translate those insights into real change. It’s as if all that effort disappears on a shelf. As one Forrester analyst wryly observes, when firms stop producing reports and dashboards, nobody even notices – a sure sign that the dashboards weren’t driving any action. In short, a dashboard alone doesn’t change a customer journey.

The pitfall of “data as end product”

The data world has its own trap: treating data as the final prize. Many companies think a well-stocked data warehouse or a sleek BI dashboard is the goal. But in reality, data should be a means to an end, not an end in itself. As ThoughtSpot notes, dashboards can “bury key insights under layers of noise” and often fail to move the needle on business outcomes. In practice, teams often fill dashboards with every metric they can imagine – assuming “more data = better decisions.” But this leads to exactly the opposite: overloaded dashboards leave leaders overwhelmed and seeing nothing important. In fact, research shows most dashboards are rarely revisited after creation, turning them into little more than digital clutter.

What’s worse, insights that sound valuable remain purely theoretical unless applied. For example, analytics may easily show that a certain customer segment has a very high CLV, but unless you actively approach those customers differently, that knowledge stays abstract. A highly detailed model of which segment is most valuable has no impact until you change your marketing or service to leverage it. In other words, data without action is like knowing a treasure is buried in your backyard but never digging.
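To make that CLV example concrete, here is a minimal sketch of computing average CLV per segment from raw order data. The segment names and numbers are made up, and historical revenue per customer is used as a deliberately naive CLV proxy:

```python
# Hypothetical sketch: average CLV per segment from (customer, segment, value)
# order records, using total historical revenue per customer as a CLV proxy.
from collections import defaultdict

def clv_by_segment(orders):
    """orders: list of (customer_id, segment, order_value) tuples."""
    revenue = defaultdict(float)
    customers = defaultdict(set)
    for customer_id, segment, value in orders:
        revenue[segment] += value
        customers[segment].add(customer_id)
    # Average revenue per unique customer in each segment.
    return {seg: revenue[seg] / len(customers[seg]) for seg in revenue}

orders = [
    ("c1", "loyalists", 120.0), ("c1", "loyalists", 80.0),
    ("c2", "loyalists", 200.0),
    ("c3", "bargain hunters", 15.0), ("c4", "bargain hunters", 25.0),
]
print(clv_by_segment(orders))  # loyalists: 200.0, bargain hunters: 20.0
```

The point of the article stands either way: this dictionary changes nothing on its own until a campaign or service decision consumes it.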

From analysis to activation

Building solid data infrastructure and governance is necessary – but it’s only step one. True impact comes when insights translate into action. This means moving from “analysis” to real-world “activation.” Examples of activation include personalized communications, dynamic customer journeys, and real-time channel optimization. For instance, linking CLV insights to marketing spend lets you treat your best customers differently: high-CLV segments might receive premium offers or dedicated support channels, boosting ROI. Similarly, robust attribution models (like multi-touch attribution or marketing mix models) help allocate budget to the most effective channels – essentially turning data analysis into smarter spending.

In practice, companies that integrate data into operations see clear wins. One report notes that brands using first-party data in their marketing “see an 8× ROI, over 25% lower CPA, and up to 2.9× revenue growth.” Personalization powered by this data also “increases customer retention, cuts costs, and delivers experiences that consumers now expect.” In short, when analytics drive real campaigns – not just reports – the numbers jump.

A personalization maturity model

Think of your data journey like learning to drive. At first, you’re just learning the controls; later, you automate and finally, you navigate complex streets on autopilot. Similarly, we can map five stages of personalization/data activation maturity:

  1. Data Collection: Everything is captured in the data warehouse. You have dashboards and reports, and you can answer “what happened?” via CLV calculations, MROI models, etc. (This is the baseline – many companies stop here.)
  2. Analyze & Report: You use BI tools and attribution models to analyze data (looking at KPIs, performance, MROI, CLV by segment, etc.). You understand what the data says, but you haven’t changed any process yet.
  3. Segmentation & Simple Activation: The first layer of action. You group customers (e.g. by CLV tier or behavior) and run basic targeted campaigns (email segments, ad audiences) based on those segments.
  4. Automated Personalization: Data feeds your systems in real time. You build dynamic journeys, AI models or recommendation engines so content adapts for each user “always-on.” For example, your site shows different product suggestions instantly based on user’s profile and behavior.
  5. Fully Customer-Driven: Data flows into every touchpoint continuously. Every interaction – website, app, in-store, support call – is tailored in real time to that individual’s context and needs.
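Stage 3 above, the first layer of action, can be sketched in a few lines: bucket customers into CLV tiers and attach a basic campaign action to each tier. The tier boundaries, actions, and numbers below are invented for illustration:

```python
# Illustrative stage-3 sketch: CLV tiers -> basic targeted campaign audiences.
def clv_tier(clv):
    if clv >= 500:
        return "high"
    if clv >= 100:
        return "mid"
    return "low"

# Hypothetical action per tier (a real setup would feed an email or ad tool).
ACTIONS = {
    "high": "premium offer + dedicated support",
    "mid": "loyalty email sequence",
    "low": "generic newsletter",
}

def build_audiences(customers):
    """customers: dict of customer_id -> estimated CLV."""
    audiences = {"high": [], "mid": [], "low": []}
    for customer_id, clv in customers.items():
        audiences[clv_tier(clv)].append(customer_id)
    return audiences

print(build_audiences({"c1": 650.0, "c2": 220.0, "c3": 40.0}))
# {'high': ['c1'], 'mid': ['c2'], 'low': ['c3']}
```

Stages 4 and 5 replace these static thresholds with models that update per interaction, but the shape of the pipeline (data in, differentiated treatment out) stays the same.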

Where does your organization fit on this spectrum? Are you still mainly looking at dashboards (level 1–2), or are you actively using those insights to power automated personalization (levels 4–5)? Many teams never progress beyond stage 2 – data is collected and analyzed, but then it just sits there.

Why activation is crucial now

Today’s customers expect this activation. In fact, 71% of consumers say they expect personalized interactions from companies. If those experiences aren’t delivered, people quickly get frustrated (76% report frustration when personalization fails). Simply put, personalization has become the new baseline for customer experience.

And the business case is clear: organizations that move beyond vanity metrics see real performance lifts. McKinsey found that companies excelling at personalization generate 40% more of their revenue from those efforts than slower-growing peers. First-party data and personalization pay off: not only do they improve loyalty and satisfaction, they deliver lower costs and higher revenue. For example, using behavioral data in marketing can slash acquisition costs by over 80%, boost customer satisfaction by nearly 80%, and improve conversions and ROI by roughly 70–80%. In practice, this means lower cost-per-acquisition, higher lifetime value, and stronger customer loyalty – exactly the outcomes executives care about.

Ultimately, the competitive edge comes not from hoarding the biggest data troves, but from activating data most effectively. In a world of privacy changes and data noise, simply collecting more data won’t win. The winners will be those who quickly turn insights into personal, relevant experiences that customers actually notice.

Turning dashboards into decisions

Collecting data and filling dashboards should be just the beginning. The real value lies in what you do with the data. If your analytics tools only end up illuminating problems but not informing action, you’re missing the point. As Forrester notes, organizations often stop making dashboards, only to find that no one even calls complaining – revealing that the dashboards were never really used to begin with.

Ask yourself: Is it enough that you have a dashboard, or are you using that knowledge to change the customer’s experience? If the answer isn’t immediate, it might be time to shift focus. Use a personalization/data-activation maturity framework as a roadmap: move from collecting data (phase 1) up through to fully automated personalization (phase 5).

At GX, we believe it’s not data itself but data-driven action that transforms businesses. Data should spark decisions – not sit unused. So as you invest in analytics, ask: how will the customer feel the impact? Take the next step from measurement to real activation, and start turning insights into tangible customer value.

Author
GX has been driving digital success for over 25 years, combining expertise in data, technology, and strategy. With a team of more than 100 specialists, they design and develop solutions that connect brands with their audiences in meaningful ways. As part of the Happy Horizon agency group, they bring together in-house technical expertise and a broad network of digital marketing talent to deliver lasting impact. GX is a Sponsor of the DDMA Digital Analytics Summit 2025 on October 9th. Get your tickets here.

Third-Party Cookies: Where We Are and What’s Next

The digital advertising world is in a stalemate between two opposing forces: the growing demand for user privacy and the need for effective, data-driven advertising. Third-party cookies are caught in the middle.

How we got here

Since 2017, when Apple first blocked third-party cookies in Safari, the focus on user privacy has only intensified. Mozilla’s Firefox and Microsoft’s Edge followed suit, building momentum toward a third-party cookie-free future.

For years, advertisers found ways to adapt, minimizing the impact of these changes. With Apple, Mozilla, and Microsoft representing just 26% of the global market (31% in Europe), the vast majority of internet users—those on Google Chrome—remained untouched. The industry worked around the limitations, but the balance was always delicate.

Then, in 2023, that balance was disrupted. Google’s Chrome, commanding 65% of the global market (61% in Europe), woke a sleeping giant with its plan to phase out third-party cookies by 2024, sending shockwaves through the industry. A phase-out in Chrome would be a decisive blow to advertisers, impacting most internet users in one stroke.

However, Google soon hit pause. Citing the need for more time to refine alternatives, the phase-out was delayed to 2025. Then, in July, the decision was further postponed, creating an air of temporary relief for advertisers and uncertainty for everyone else. While some interpret this as indecision, others see it as a carefully calculated move to preserve balance in a rapidly shifting landscape.

What to Expect Now

With our focus on data collection, quality and compliance, Cloud Nine Digital has been closely watching Google’s decisions. The tug-of-war between privacy advocates and the advertising industry is intense, and Google is positioned right at the center. The company’s approach is clear: it seeks to maintain a balance between respecting user privacy and preserving the effectiveness of online advertising.

Chrome and Google’s marketing division operate separately, allowing Chrome to lead the charge for greater privacy without directly threatening Google’s broader advertising business. Yet, these teams aren’t isolated. Their collaboration is evident in Google’s investments in Privacy Sandbox APIs and first-party data solutions (such as Enhanced Conversions and Customer Match), which provide alternatives to third-party cookies.

As these privacy-focused tools become more sophisticated, the tension between privacy and advertising is likely to ease. Once these tools reach maturity, Google could resume its plan to phase out third-party cookies, trusting that the new alternatives will keep advertisers in the game. It’s a balancing act, ensuring that advertisers can still thrive in a privacy-first environment.

Rather than abruptly eliminating cookies, we believe Google will take a softer approach, introducing a user consent banner in Chrome. This would let users opt in to cookies or similar tracking methods, much like Apple’s in-app permission system. The famously low opt-in rates seen by Apple (roughly 25%) could be exactly what Google is looking for. This solution allows users to exercise control over their data and organically phases out cookies while keeping the door open for advertisers to deliver tailored experiences. In the process, Google may strengthen its position in the advertising landscape: by then it will have made sure its alternative solutions perform well, while other vendors that depend more heavily on third-party cookies and have fewer options to team up with Chrome may still suffer a big blow.

Impact on the Industry

The tension between privacy and advertising is especially clear in the ongoing reliance on third-party cookies. Audits of Cloud Nine Digital customers show that around 50% of vendors still depend on them, particularly in top-of-funnel and retargeting solutions like Criteo and RTB House.

Other platforms, like Google Ads, Meta Ads and Bing Ads, have built much of their success on third-party cookies to drive marketing performance. And although they have made many changes to switch to a more first-party focused setup, they continue to use them to power their user recognition and targeting capabilities. The potential loss of third-party cookies threatens to disrupt this balance, making it harder to deliver effective ads and pushing up costs for advertisers.

Preparing for the Shift

The delicate balance between privacy and advertising is shifting, but the bell certainly tolls for the third-party cookie. While the exact timeline remains uncertain, what’s clear is that the industry must be prepared. Keeping an eye on developments—particularly user consent mechanisms within browsers—will help you stay ahead of the curve.

Marketers should also audit their tech stacks, assessing how much they rely on third-party cookies and determining which platforms contribute most to traffic, conversions, and revenue. By understanding the role cookies currently play, businesses can adapt to the coming changes and remain competitive in a privacy-conscious world. The key will be finding new ways to strike that balance between user privacy and advertising performance.
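A first pass at such an audit can be automated. The sketch below, using hypothetical cookie data, classifies observed cookies as first- or third-party by comparing each cookie's domain against the site's own domain (a production audit would also handle public-suffix edge cases and subdomain ownership):

```python
# Hypothetical audit sketch: split cookies observed on a site into first-party
# (set on the site's own domain) and third-party (set on a vendor's domain).
def classify_cookies(site_domain, cookies):
    """cookies: list of (cookie_name, cookie_domain) pairs."""
    report = {"first_party": [], "third_party": []}
    for name, domain in cookies:
        d = domain.lstrip(".")  # ".example.com" and "example.com" are equivalent
        if d == site_domain or d.endswith("." + site_domain):
            report["first_party"].append(name)
        else:
            report["third_party"].append(name)
    return report

report = classify_cookies("example.com", [
    ("_ga", ".example.com"),         # analytics cookie on the site's own domain
    ("uid", ".criteo.com"),          # retargeting vendor cookie
    ("session", "shop.example.com"),
])
print(report)  # {'first_party': ['_ga', 'session'], 'third_party': ['uid']}
```

Mapping each third-party cookie back to the vendor and the revenue it supports is the judgment step that no script can do for you.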

About the authors

Thomas de Ruiter is Director Solution Design and Guy Rippe is Head of Data Collection at Cloud Nine Digital, an Amsterdam-based Data & Analytics agency specialized in data collection, processing and compliance to support companies in building and maintaining their privacy-first fundament for data-driven decision making in digital.

 

The Challenges and Opportunities of data collection in today’s Digital Era

Being able to track and collect the data you want from your website is no longer a given. The digital landscape has changed enormously, not only on a technical level but also from a legal standpoint. The rules for digital marketing have become tighter every year, and not without reason, as they safeguard the protection of online users. This forces us to really think about how we use this data. What technical aspects do you need to consider when collecting data? How will you ensure compliance with local or international regulations? To what extent will this affect your profits? And lastly, what tools can help you not only collect this data but also use it effectively? In this article, we explore some of these regulations, new developments, and exciting opportunities.

 About the author & translator: Steven van Eck (author) is Web Analytics Specialist, and Dave Swart (translator) is Data Specialist & Lead Strategy at SDIM. 

Legal and technical limitations 

When it comes to collecting and processing data, the restrictions are quite varied. We have detailed the most crucial legal limitations that will directly influence these processes below. 

 GDPR 

Ever since its inception in 2018, the GDPR (known as the AVG in the Netherlands) has significantly restricted the collection of personal data. Businesses are now required by law to be transparent about the data they collect and how they intend to use it. One of these (very familiar) changes is the explicit request for consent to store and process user data, a clear indicator that user privacy has become the epicentre of this new era.

 Cookiewall 

One of the direct consequences of the GDPR has been the implementation of cookiewalls on websites. These let users consent to or refuse the placement of cookies, such as marketing cookies, on their devices and browsers. However, implementing a cookiewall has a direct and large impact on the quantity of data collected. A sizable decrease is very common, and its extent depends largely on how the cookiewall is configured, making this a crucial step during a tracking setup.
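The size of that decrease can be estimated with simple arithmetic: the share of sessions you can still measure is roughly the consent rate times the share of visitors who interact with the banner at all. The rates below are purely illustrative, not benchmarks:

```python
# Back-of-the-envelope sketch: how a cookiewall shrinks measurable data.
def measurable_share(consent_rate, banner_interaction_rate=1.0):
    """Fraction of sessions that still yield usable marketing data."""
    return consent_rate * banner_interaction_rate

# Hypothetical: 70% of interacting visitors accept, 90% interact at all.
share = measurable_share(0.70, 0.90)
print(f"{share:.0%} of sessions remain measurable")  # 63%
```

Banner placement, wording, and button design all move these two rates, which is why configuration matters so much.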

 Adblockers 

Another familiar consequence, one that has gained quite a following in recent years, is the adblocker. These tools can automatically block tracking scripts in the browser, even if a user has consented to them. As with cookiewalls, the share of data lost to these blockers can be substantial, with some websites losing a considerable share of their measurable traffic. This makes it more difficult for digital advertisers to assess campaign results, consequently limiting their ability to optimise those campaigns.

 ITP and ETP 

Apple’s Intelligent Tracking Prevention (ITP) and Mozilla’s Enhanced Tracking Prevention (ETP) are techniques that limit third-party tracking. Built into Safari and Firefox respectively, they affect the way we collect data, depending on the user, their browser and its settings. This can range from blocking trackers by default to (automatically) denying third-party cookies or reducing their lifespan. Even first-party cookies are not off limits: their lifespan is severely reduced if the user does not change any settings. Moreover, blocking or limiting third- and first-party cookies negatively impacts the accuracy of user recognition and, by extension, value attribution.
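One common mitigation, sketched below, is to set the identifier cookie from your own server via an HTTP `Set-Cookie` header rather than via client-side JavaScript, since the strictest lifespan caps target script-written cookies. Exact browser behaviour varies and changes over time, so treat this as an illustration of the mechanism rather than a guarantee of a longer lifespan:

```python
# Sketch: building a server-set first-party cookie header with an explicit
# lifespan, instead of writing the cookie from JavaScript in the browser.
def set_cookie_header(name, value, max_age_days):
    max_age = max_age_days * 24 * 60 * 60  # Max-Age is expressed in seconds
    return (f"{name}={value}; Max-Age={max_age}; "
            "Path=/; Secure; HttpOnly; SameSite=Lax")

header = set_cookie_header("client_id", "abc123", max_age_days=365)
print(header)
# client_id=abc123; Max-Age=31536000; Path=/; Secure; HttpOnly; SameSite=Lax
```

`HttpOnly` also keeps the cookie out of reach of scripts entirely, which is appropriate for an identifier that only the server needs to read.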

 An ethical approach 

Besides the legal ramifications and technical hindrances, we also need to develop an ethical conscience in this matter. It is vital to strike a good balance between collecting genuinely valuable data and respecting users’ privacy and wishes. Is the data you currently collect actually useful and necessary? And what effect does collecting it have on user trust? Some businesses and organisations choose to collect less data, even where it is technically allowed. Sometimes forgoing a valuable resource such as data can increase the trust users place in you, ultimately having a positive impact on loyalty and retention.

Chances and opportunities 

Despite the many limitations that make it harder to collect data while respecting laws and regulations and overcoming various technical difficulties, there are two main reasons not to be discouraged: the rules apply to everyone, and there are still plenty of opportunities out there to benefit from.

 First-party data 

We can probably all agree that casually sharing your own or anyone else’s data with third parties is something best left in the past. The data you collect through your own channels, such as websites, apps or physical storefronts, however, has become incredibly valuable. These datasets can be stored in your own data warehouse without necessarily relying on a third party. Simply put, you own this data. Furthermore, it can be analysed and leveraged to gain useful insights into your customers and their behaviour. For example, if you collect email addresses through your website, you can store them and follow up on them, again without the need for a third party, if you so choose. Especially now that third-party cookies are (very slowly) disappearing, this form of data is becoming invaluable.

 Server Side Tracking 

Server-side tracking serves two important purposes: it offers a technical solution to modern tracking challenges, and it protects your data while safeguarding user privacy. This is because you are no longer solely dependent on browser or client-side tracking, as most tags fire from your own server environment. Moreover, it prevents adblockers from blocking your tags, lets you set the lifespan of cookies, and gives you control over what data you collect and which tags fire from the server, as (personal) data can be deleted or masked. Our influence over client-side tracking is often fairly limited, especially with regard to what data is sent to the various platforms; tracking scripts that capture personal data from filled-in forms, for example, are not unheard of. Server-side tracking is a great countermeasure, as it allows you to exert control over these scripts by preventing that data from being sent.
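The masking step described above can be sketched as follows: the server receives the raw event, drops fields it should never forward, masks an email address that leaked into another field (such as a page URL), and only then relays the event to downstream vendors. The field names are hypothetical:

```python
# Minimal sketch of the server-side sanitization step before forwarding an
# event to marketing or analytics vendors.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize_event(event, blocked_fields=("email", "phone")):
    clean = {}
    for key, value in event.items():
        if key in blocked_fields:
            continue  # drop explicitly blocked fields entirely
        if isinstance(value, str) and EMAIL_RE.search(value):
            # mask email addresses that leaked into other fields, e.g. URLs
            value = EMAIL_RE.sub("[redacted]", value)
        clean[key] = value
    return clean

event = {
    "event_name": "form_submit",
    "email": "jane@example.com",
    "page_location": "https://shop.example/thanks?user=jane@example.com",
}
print(sanitize_event(event))
# {'event_name': 'form_submit', 'page_location': 'https://shop.example/thanks?user=[redacted]'}
```

Because this runs in your own server environment, the vendor only ever sees the sanitized payload, which is exactly the control that client-side tags lack.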

 Google (and Microsoft) Consent Mode 

Google has developed something called Consent Mode that allows data collection in a limited form. It focuses on users who have declined (full) consent for placing cookies. In these situations, Consent Mode still lets you gain insights, but in a more privacy-friendly manner, which in some cases can drop the use of cookies entirely. The amount of data collected depends on the level of consent a user has granted or denied. Consent Mode can also contribute to a more accurate overview of user activity and behaviour on a website. Besides Google, Microsoft has now also released its own version of Consent Mode, which is required to ensure the effectiveness of Microsoft pixels. 
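Conceptually, consent gating means that the consent categories a user has granted determine which fields a hit may contain. The sketch below illustrates only this general idea; the category names mirror Consent Mode’s terminology, but the logic is a simplified assumption, not Google’s or Microsoft’s actual implementation:

```python
# Generic illustration of consent gating (NOT Google's or Microsoft's
# actual implementation): the consent categories a user granted decide
# which fields a hit may contain.
def build_hit(event: str, consent: dict, client_id: str, page: str) -> dict:
    hit = {"event": event, "page": page}  # cookieless ping: always allowed
    if consent.get("analytics_storage"):
        hit["client_id"] = client_id      # persistent identifier allowed
    if consent.get("ad_storage"):
        hit["ads_data"] = True            # advertising use allowed
    return hit

denied = build_hit("page_view", {"analytics_storage": False, "ad_storage": False},
                   "abc123", "/home")
granted = build_hit("page_view", {"analytics_storage": True, "ad_storage": False},
                    "abc123", "/home")
print("client_id" in denied, "client_id" in granted)  # False True
```

When consent is denied, only an anonymous, cookieless signal remains, which is what allows modelled insights without identifying the user.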

AI and Machine Learning 

If we’re discussing hot topics, we definitely cannot leave AI and machine learning out of the conversation. While arguably among the most popular developments of recent years, these ‘new tools’ are still in the early stages of their respective lifecycles. Nonetheless, they are expected to play an even more significant role in the future, for example in analysing large datasets or recognising and predicting patterns and trends. Not only will they make certain processes more efficient, they can even contribute to decision-making on data-related issues. 

Conclusion 

Collecting data and maximising your online results can be a huge challenge at times, while simultaneously creating new opportunities we should fully embrace and utilise. Even though we can’t stop the continuous change of the digital landscape, we can choose to play along. How? By simply making an effort to understand the rules and how the game is played. The fact that we have to be mindful of the data we collect does not mean we can’t collect it in a way that respects our users and their privacy. Oftentimes we’re too keen on rejecting change because “it’s unfamiliar”, but when all these new rules and limitations force us to think differently, they could open the door to new and creative possibilities. 

Navigating the evolving digital marketing landscape

2024 is another year in which the letters “A” and “I” are on the tips of everyone’s tongues. The emergence of AI is impacting digital analysts as well, but the ultimate course of this revolution is unclear. When discussing changes in the analytics landscape, it’s important to recall the impact of moves by Google, such as the sunset of Universal Analytics and GA 360, and the retention of third-party cookies. However, the future might belong to other analytics platforms that often deliver better and more innovative functionalities than the Mountain View giant. 

So, if you are wondering if you should stick with GA4, place your trust in AI, or maybe turn to another analytics platform, this article will provide information to help you navigate through changes happening today and those coming in the near future.  

About the author: Vincent de Winter is Regional Manager (Benelux) at Piwik PRO, sponsor of the 2024 DDMA Digital Analytics Summit. 

Don’t be afraid to embrace new tools 

Google Analytics 4 is tailored for technical teams with considerable budgets, and shows minimal resemblance to its predecessor. This new product has caused quite a stir in the analytics community, leaving marketers with the tough task of trying to adapt. Google’s shift away from universal accessibility has prompted users to explore new tools that are more suitable for their use cases and expertise level.  

To compete in an increasingly demanding market, marketers need solutions that allow them not only to collect data but to activate it and analyze trends in real time. Analytics platforms like Piwik PRO Analytics Suite offer similar functionalities to GA 3, such as reports, session-based analytics and an intuitive interface. But there is so much more out there! Go explore other options if you’re not satisfied with your current software.  

Emerging issues to keep in mind 
  1. Opportunities and risks of the AI wave

Data is the driving force behind analytics platforms and artificial intelligence algorithms. Analytics solutions can leverage AI capabilities to improve efficiency with automated data processing, more precise predictions of consumer behavior, and personalized content tailored for individual users.  

However, growing concerns regarding security and privacy are arising from the use of extensive datasets containing confidential information. The risks of possible exploitation and breaches are significant. Generative AI adds another layer of complexity to things by creating synthetic data that, although beneficial, may carry its own privacy risks. In the face of these threats, prioritizing data privacy must be at the forefront of implementation of AI-based solutions. 

  2. Are third-party cookies here to stay after all?

Most major browsers, except Chrome and Opera, have already restricted or blocked third-party cookies, as their use can potentially compromise users’ privacy. Third-party data is not unique, which means that your competitors can purchase the same datasets from brokers. Digital marketers and analysts cannot rely solely on such types of cookies to reach their desired audience. That’s why they must continue developing other data collection strategies to ensure comprehensive reach across platforms.  

Win the future with a holistic approach 

Adapting to these emerging changes in the digital marketing landscape requires a holistic approach that goes beyond merely switching tools. Here’s how you can prepare your organization for the new era of digital analytics: 

  1. Data strategy

As the shift from third-party to first-party data accelerates, organizations must rethink their data collection strategies. Instead of collecting vast amounts of data with minimal oversight, focus only on the data needed to achieve your business goals. This will not only help you stay compliant with privacy laws but also build trust with your customers. Transparency about what data is collected and how it is used will encourage them to share more information, helping you gain valuable insights. 

A solid data strategy should be built on feedback from various stakeholders across your organization. Understanding their needs and challenges will help you define key performance indicators (KPIs) and set clear goals for your analytics efforts. Focusing on first-party data ensures that critical data points remain under your control and free from vendor restrictions. 

Overall, user data must help you gain actionable insights to address your business use cases. The data you collect should tell you: 

  • How to drive profits and sales. 
  • How to optimize operations. 
  • How to improve team performance. 

In addition, remain open to new channels that might be more cost-efficient or help increase your reach. For example, explore Microsoft Ads, Quora Ads or Amazon instead of relying on Google Ads alone.  

  2. Organizational processes

Ensuring that all departments within your organization understand the value of a data-driven approach is key to growth. Effective collaboration between marketing, HR, sales, and technology teams in implementing data strategies can benefit the entire company.  

You should also choose technology stacks not only for their capabilities, but also for their ease of use. The cost of many platforms (especially the free-of-charge ones, but not all) depends on how many hours you need to pour into them to achieve baseline productivity. If your company doesn’t want to spend additional resources to learn a new platform, you should explore other options that might be more sustainable. 

  3. Technology

Evaluating and matching your analytics needs with available tools is a lot of work, but it can pay off in the long run. You don’t need to choose the most popular tool.  

The analytics market continues to expand, offering a variety of platforms, from simple to complex solutions suited for sensitive industries to those adaptable for small to medium-sized enterprises (SMEs). When evaluating tools, consider factors such as metrics, reports, customization options, and integration capabilities. Figure out how much time and resources your analytics setup would require in each case. 

A well-integrated data stack allows you to connect data from various sources and activate it in other platforms, such as customer relationship management systems (CRMs) or email software. This allows for personalized customer interactions and more effective marketing strategies. 

When a copy of your analytics data is needed in your data warehouse for other teams, choose a tool that exports to data warehouses natively, like Snowplow, or provides a managed copy, such as from GA4 to BigQuery, or from Piwik PRO Analytics Suite to BigQuery, AWS, Azure, or an SFTP destination. When choosing the elements of your data strategy, you should differentiate between point solutions and broad solutions, using each for what it does best: 

  • Point solutions, including CRM, analytics, automation, and experimentation, are staying in the marketer’s arsenal. They can be used to: 
      • Execute marketing tactics, such as quick personalization or experiments. 
      • Adjust advertising spend depending on current short-term trends (most popular search terms or most popular pages).
      • Approach customers at the right time with the message they expect. 

 

  • Broad solutions like data warehouses are necessary to execute a first-party data strategy. They allow you to: 
      • Create funnels across the company – from becoming a lead to returning as a customer. 
      • Execute a broader, long-term strategy – like betting on content that attracts customers with the highest lifetime value. 
  4. Privacy

Incorporating privacy into your data strategy is essential. Regulations like the EU’s Digital Markets Act (DMA) and Digital Services Act (DSA) are in effect, and we can observe how existing laws such as GDPR are being enforced. In the US, federal data privacy laws are in the making, and guidance on HIPAA and the use of tracking technologies is becoming richer. 

If your company operates in the EU, follow the data protection authority (DPA) guidelines for each country where you process data. Also, when considering compliance, you might want to check CNIL’s consent exemption program, which shows analytics tools that can work without consent.  

It’s crucial to work closely with your legal team as privacy rules continue evolving. They can assist in clarifying your legal responsibilities and ensuring that all teams grasp the legal boundaries and necessary steps for compliance. 

Review how you collect and process data. Reduce the amount of data gathered, identify where the data comes from, and understand why each data category is being processed. Obtain only the data needed to achieve your business goals and ensure adequate anonymization if consent is not provided. Mishandling data can lead to fines and loss of trust.  

Finally, for EU-based businesses, you should think about where your data is stored and hosted to ensure GDPR compliance. Using European analytics platforms with EU hosting can simplify compliance. 

Long-term success in digital analytics 

The digital analytics landscape is rapidly changing, and organizations that can adapt to it will thrive. Crafting a solid data strategy, improving organizational processes, and choosing privacy-compliant technology can position your company for long-term success. Solutions offering functionalities like data activation, real-time dashboards and consent management are compelling alternatives to GA4, allowing you to collect and analyze data in a compliant and transparent way. 

As you navigate the challenges and opportunities of the years to come, remember that you’re doing more than just changing tools: you’re making a fundamental shift in your approach to digital analytics. By taking a long-term perspective and adapting now, you’ll be better prepared for whatever the future holds. 

Key Pillars for Modern Measurement

About the Author: Weiwei Liu-Schröder is Data & Measurement Lead at Google (Sponsor of the 2024 DDMA Digital Analytics Summit)

As someone who grew up in Germany, I am very familiar with people’s obsession with accuracy and good manners in quoting (see the quote below). Goethe’s proverb “One only sees what one knows” highlights the importance of having a solid understanding of the subject matter and relevant metrics to effectively interpret and utilize data. In the world of data and measurement, a vast amount of collected data alone can be overwhelming. Without a clear understanding of goals, user behavior and key performance indicators (KPIs), it can lead to misguided conclusions.

“Man sieht nur, was man weiß” (Johann Wolfgang von Goethe) – “One only sees what one knows”

Working at Google, Goethe’s proverb has served me as a reminder to invest time in learning the fundamentals before jumping into the interpretation of data. One common behavior I have seen within marketing teams is that they first decide on a storyline and then scramble to find the data to match that storyline and justify ad spend. That is why I want to advocate for the role of analytics as a strong foundation before any investment in attribution, incrementality tests or MMMs, or even before paid campaigns. If you have a brand-new website, think of establishing a baseline of your website’s current performance. This includes understanding your organic traffic, user behavior, and conversion rates. Having this baseline data provides a reference point to measure the impact of your paid investments later on. Some people take Analytics for granted; they even forget to update their tagging, which can prevent data from being captured accurately. This undermines the tagging infrastructure and leaves you without a reliable foundation. To learn more about how to build a durable ads infrastructure with consented data, have a look here.

In recent years I have heard many marketers ask questions like:

    • Which marketing channel drove most users to our website?
    • Which users convert with the largest basket?
    • Can we see which platform brings us the most revenue?

To answer these questions, my colleague Ana Carreira Vidal, together with a group of 40 Google experts, led the creation of a playbook around Modern Measurement this year, which you can download here.

In this playbook she discusses different maturity stages for media effectiveness and proposes a framework. Interestingly, she states that no single tool has all the answers, and that the most mature frameworks use multiple tools. She also stresses the need to invest in a durable measurement setup with privacy-centric measurement tools that maximize observed data and leverage first-party data. I want to expand that view to incorporate Analytics to gain a holistic understanding of Modern Measurement. 

1. Analytics – Foundational Pillar for Modern Measurement

Analytics is a non-negotiable tool that helps businesses track their website traffic and measure the effectiveness of their marketing campaigns. However, Analytics can also be used as the foundation for a modern measurement strategy that includes attribution, incrementality testing, and marketing mix modeling (MMM).

Gaining insights into user behavior is crucial to steer your online efforts. Therefore, I see huge benefits in starting with Analytics to gain a holistic view and to start streamlining marketing efforts across different platforms. For example, by leveraging the advanced audience capabilities of Google Analytics, combined with the power of BigQuery and Looker, you can gain a deeper understanding of your customers and drive better business outcomes. Having worked on Google Analytics over the last few years, I witnessed how its seamless integration with other Google products like Google Ads and the Google Marketing Platform helped unlock some of those first-party insights.

How to use: Measure with Analytics continuously, before moving on to the other pillars.

2. Attribution – Underestimated Pillar of Modern Measurement

Attribution is the process of assigning credit for conversions to different marketing channels or touchpoints. This information can then be used to optimize marketing campaigns and allocate budgets more effectively. Attribution has historically been highly valued, yet at the same time questioned and underestimated in modern digital marketing. In a world where customer journeys have become increasingly complex, traditional attribution models fail to acknowledge the contribution of earlier touchpoints. Data is often siloed, making it difficult to understand the full user journey. This lack of unified data hinders accurate attribution and makes it difficult to determine which touchpoints played the most significant role in driving the conversion.

The notion that ‘attribution is dead’ has been circulating among marketers. However, a more accurate assessment is that attribution is undergoing a significant transformation to prioritize privacy and long-term sustainability. While our reliance on traditional conversion tracking might diminish over time, the combination of sophisticated modeling techniques and privacy-conscious measurement solutions will enable advertisers to continue effectively evaluating the impact of their ad campaigns. Data-driven attribution will remain the cornerstone of optimizing day-to-day ad activities, offering granular insights, incrementality calibration, and seamless integration with AI-powered bidding strategies.

To steer on value, your attribution model is crucial, as it might be the main source of data. Whether you bid on tROAS or tCPA, your attribution model defines your conversion value.
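As a simplified illustration of why the model choice matters, the sketch below applies two textbook models, last-click and linear, to one fabricated journey. These are not any platform’s data-driven attribution, just the basic mechanics:

```python
from collections import defaultdict

# Illustrative sketch only: two textbook attribution models applied to a
# single fabricated journey. Real platforms use data-driven models.
journey = ["paid_search", "social", "email"]  # touchpoints, in order
conversion_value = 100.0

def last_click(touchpoints, value):
    """All credit goes to the final touchpoint before conversion."""
    credit = defaultdict(float)
    credit[touchpoints[-1]] += value
    return dict(credit)

def linear(touchpoints, value):
    """Every touchpoint receives an equal share of the credit."""
    credit = defaultdict(float)
    share = value / len(touchpoints)
    for tp in touchpoints:
        credit[tp] += share
    return dict(credit)

print(last_click(journey, conversion_value))  # {'email': 100.0}
print(linear(journey, conversion_value))      # equal thirds per channel
```

The same conversion yields very different channel valuations, which is exactly why the attribution model drives your bidding signals.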

How to use: Look at attribution regularly and on an ongoing basis.

3. Incrementality Testing

Incrementality testing is a method for measuring the impact of a marketing campaign by comparing the results of the campaign to a control group. It uses randomized controlled experiments to compare the change in consumer behavior between groups that are exposed to or withheld from marketing activity, while keeping all other factors constant. These tests are becoming more accessible and popular among advertisers, thanks to more open-source resources and the increased availability of on-platform experiments. Analytics can be used to track campaign performance and measure the lift in conversions attributable to the campaign. Incrementality testing is the gold standard for measuring causality, so it gives the most rigorous view of the incremental value brought by the marketing investment. It provides a snapshot of a concrete strategy at a concrete point in time.
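The basic readout of such an experiment can be sketched in a few lines. The numbers below are fabricated for illustration; a real analysis would also test for statistical significance:

```python
# Toy incrementality readout on fabricated numbers: compare conversion
# rates between a randomly assigned exposed group and a holdout group.
exposed_users, exposed_conversions = 10_000, 520
control_users, control_conversions = 10_000, 400

exposed_rate = exposed_conversions / exposed_users  # 5.2%
control_rate = control_conversions / control_users  # 4.0%

absolute_lift = exposed_rate - control_rate
relative_lift = absolute_lift / control_rate
incremental = absolute_lift * exposed_users  # conversions caused by the campaign

print(f"absolute lift: {absolute_lift:.3%}")          # 1.200%
print(f"relative lift: {relative_lift:.1%}")          # 30.0%
print(f"incremental conversions: {incremental:.0f}")  # 120
```

Because assignment is random, the 120 extra conversions can be attributed causally to the campaign rather than to seasonality or audience differences.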

How to use: Use Incrementality Testing every quarter.

4. Marketing Mix Modeling (MMM)

MMM is a statistical technique that can be used to measure the impact of different marketing channels on sales. It uses top-level modeling with advanced statistics to understand what drives sales. It measures media investment efficiency on top of base sales and other external factors that impact sales (e.g. seasonality, pricing, the economy). It gives a holistic overview of all channels, sales, and external factors, and can also provide a longer-term view of media impact. It doesn’t require user-level data, making it more future-proof. Because it requires modeling with causal inference assumptions and at least two years of historical data, it can be expensive to run. Analytics can be used to provide input data for MMM, such as website traffic, conversion rates, and revenue.

How to use: Twice a year (however, some advanced advertisers do it quarterly).

“千里之行,始于足下” (Laozi) – “The journey of a thousand miles begins with a single step.”

Born in China, I was raised with the values of perseverance and persistence. That is why I want to close with an emphasis on the importance of taking small, consistent actions towards achieving a larger goal. In the world of data and measurement, it is important to break down the mountain of data into specific metrics and goals to derive actionable insights. This requires incremental progress through continuous analysis, experimentation, and refinement.

I recently had a discussion with a client who wanted to start on the topic of media effectiveness. When I asked them what their hypothesis was, they blanked: they hadn’t actually thought about what exactly they wanted to test. When I added the question “What do you want to change if the hypothesis is proven right or wrong?”, a long pause told me they hadn’t really thought about that either before starting. As Ana states in her playbook: “each experiment should aim to answer a business decision that is relevant enough to warrant designing an experiment”. Sometimes it might be better to perform a causal impact analysis or a simple A/B test. Sometimes the answers can be found in the existing dataset within your analytics tool, like Google Analytics.

If you are new to the Analytics world, there are multiple ways to get started. There is a large and active community of users, developers, and experts out there, ranging from broader ones like the Measurecamp community to more specific ones like the Google Analytics community or local Women in Analytics groups. These vibrant communities provide a wealth of resources, including forums, blogs and tutorials, making it easier to learn, troubleshoot, and stay up to date with the latest developments.

I encourage you to get the fundamentals right by building on a solid foundation of Analytics before combining it with other measurement effectiveness tools like attribution, incrementality testing and MMM.

Typically, the most mature frameworks use all four pillars (Analytics, Attribution, Incrementality and MMM), with Analytics being non-negotiable. Analytics is more than just tracking website traffic. It functions as the base for a modern measurement strategy, fueling the understanding of what marketers know, because in the end “one only sees what one knows”.


Unlocking the Power of Real-Time Data Across Industries

In today’s fast-paced digital world, real-time data is transforming how businesses operate, enhancing customer experiences, driving marketing success, and building trust. This dynamic data enables companies across various industries to immediately respond to customer behaviors and market changes, creating a competitive edge. Let’s explore how real-time data can be leveraged across different sectors.

About the author: Merinda Hillier, VP Marketing EMEA, at Tealium (one of the sponsors of the DDMA Digital Analytics Summit 2024).

1. Retail: Delivering Personalized Customer Experiences

Retailers are leading the way in using real-time data to enhance customer interactions. With real-time insights, retailers can track customer behavior across both online and physical stores. For instance, when a customer browses a product online, a retailer can instantly send them a personalized offer or recommendation through email or a mobile app. Additionally, in-store behaviors, such as lingering in a particular section, can trigger tailored promotions or assistance, boosting the likelihood of a purchase.

This level of personalization not only increases sales but also improves customer satisfaction. By understanding and responding to customer needs in real time, retailers can create a seamless shopping experience that feels personal and relevant.

2. Finance: Enhancing Risk Management and Customer Engagement

In the finance sector, real-time data is critical for managing risks and improving customer engagement. Financial institutions can monitor transactions in real-time to detect and prevent fraudulent activities. For example, if an unusual transaction pattern is detected, the bank can instantly notify the customer and take preventive actions, such as freezing the account or requiring additional verification.
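A real fraud system relies on machine-learning models and streaming infrastructure, but the core idea can be sketched with a simple rule: flag transactions that deviate strongly from an account’s recent spending pattern. All numbers below are fabricated:

```python
import statistics

# Highly simplified sketch: flag a transaction as suspicious when it
# lies far above the account's recent spending pattern. Real systems
# use ML models over streaming data; these numbers are fabricated.
recent_amounts = [23.50, 41.00, 18.75, 36.20, 29.90]

def is_suspicious(amount: float, history: list[float], k: float = 3.0) -> bool:
    """Flag amounts more than k standard deviations above the mean."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    return amount > mean + k * spread

print(is_suspicious(30.00, recent_amounts))   # False: typical amount
print(is_suspicious(950.00, recent_amounts))  # True: far outside the pattern
```

Applied per transaction as events arrive, even a rule this simple shows how real-time scoring enables an immediate notification or account freeze.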

Moreover, real-time data helps banks and financial services companies provide personalized financial advice. By analyzing a customer’s spending habits, income, and financial goals, they can offer tailored investment opportunities or budgeting tips in real time, enhancing customer engagement and trust.

3. Healthcare: Improving Patient Care and Operational Efficiency

Real-time data is revolutionizing healthcare by enabling more accurate and timely patient care. For example, wearable devices can monitor a patient’s vital signs and send real-time data to healthcare providers. If any abnormalities are detected, the healthcare team can intervene immediately, potentially saving lives.

Additionally, real-time data helps healthcare facilities manage resources more effectively. Hospitals can track patient flow and optimize staffing levels or bed availability in real time, ensuring that patients receive timely care without overburdening healthcare workers.

4. Manufacturing: Optimizing Production and Reducing Downtime

In manufacturing, real-time data is essential for optimizing production processes and minimizing downtime. Sensors on machinery can monitor performance and detect potential issues before they lead to breakdowns. This predictive maintenance approach, powered by real-time data, reduces costly downtime and extends the life of equipment.

Furthermore, real-time data enables manufacturers to adjust production schedules on the fly based on demand fluctuations. This agility helps companies meet customer orders more efficiently and reduces waste, contributing to a more sustainable and profitable operation.

5. Marketing: Supercharging Campaigns with Real-Time Insights

Marketing is another area where real-time data is making a significant impact. Traditional marketing campaigns often rely on static data, which can quickly become outdated. However, by using real-time data, marketers can create dynamic campaigns that adapt to customer behaviors as they happen.

For example, a customer who has just browsed a specific product online might receive an email or ad featuring that product within minutes, encouraging them to make a purchase while their interest is still high. Real-time data also allows marketers to test and refine campaigns on the go, optimizing for the best performance.

6. Building Customer Trust Across All Industries

No matter the industry, real-time data plays a crucial role in building and maintaining customer trust. When businesses use real-time data to respond to customer needs promptly, they demonstrate a high level of care and attentiveness. For example, a customer who experiences an issue with a product or service can receive immediate support based on real-time data, preventing frustration and enhancing their overall experience.

Moreover, being transparent about how real-time data is collected and used helps to build trust. Businesses that prioritize data privacy and security, while still delivering personalized experiences, can strengthen their relationships with customers and foster long-term loyalty.

Conclusion: The Strategic Use of Real-Time Data

Real-time data is not just about speed; it’s about making data-driven decisions that enhance customer experiences, improve operational efficiency, and build trust. Whether it’s in retail, finance, healthcare, manufacturing, or marketing, the ability to leverage real-time insights gives businesses a significant competitive advantage. By integrating real-time data into every aspect of their operations, companies can not only meet but exceed customer expectations, driving success in today’s fast-paced market.

For more information and detailed examples of how real-time data can transform your business, explore Tealium.


Clean Data, Clear Vision: Unlock the Power of Self-Serve Data and Analytics

Building and maintaining a strong self-serve analytics program is essential for top-performing companies. But not all businesses make it a priority. In conjunction with Women in Analytics, Amplitude hosted a three-part webinar series to tackle that issue and other challenges:

  • Demystifying Data Governance dug deep into why governance matters and how to get started or step up your efforts.
  • Unleashing Self-service Analytics shared the promise of making real-time data available to employees.
  • The Impact of Cultivating a Data-Driven Culture wrapped everything up with advice on gaining buy-in at every level of your organization.

Throughout these three discussions, led by Women in Analytics member Kristy Wedel, Learning Experience Manager at AlignAI, the speakers shared plenty of tips for startups and enterprises alike. Whether you’re looking to take an established program to the next level or struggling to get the resources you need, read on for expert advice.

About the author: Michele Morales is a product marketing manager at Amplitude, sponsor of the 2024 DDMA Digital Analytics Summit.

Speed and agility are winning propositions

Analytics isn’t new, but the way companies gather and use data has completely changed in the last ten years. Digital analytics, especially self-service analytics, can be the difference between growth and a slow descent into irrelevance. As Sumathi Swaminathan, VP of Engineering at Amplitude, says, “Being able to quickly adapt to changing business conditions and market dynamics is critical for companies to win in their space.”

Traditional business intelligence tools often require decision-makers to formulate questions and wait for data to inform solutions. Now, there is no need to choose between speed and information.

“Broadly, I think businesses now understand that speed is the only way to grow revenue, diminish competition, and stay fresh in customers’ minds.”
Sumathi Swaminathan, VP of Engineering, Amplitude

Self-serve analytics enable that agility—and not just at the leadership level. By understanding prospective and existing customer behavior, employees in departments ranging from marketing to product to operations are empowered to make quick, data-driven decisions. Self-serve analytics gives the answers directly to those with the big questions.

The best way to get started: Just do it

Many companies are intimidated by implementing a new self-serve analytics program, especially given the complexity of legacy business intelligence products.

“As you start making faster decisions . . . the value will become very obvious to the business.”
– Sumathi Swaminathan, VP of Engineering, Amplitude

Sumathi recommends a gradual transition plan that eases in new tools while enabling employees to continue using their old system. Offer training, set key performance indicators (KPIs), and launch pilot projects to show fast value.

Carolyn Feibleman, Principal Product Manager at Amplitude, adds that giving the change a positive spin can do wonders. Look to emulate your successful peers, using their data policies to inform yours. Competition can help motivate your employees to embrace your new system.

Analytics is an incremental process

It’s tempting to take advantage of everything a new platform has to offer, but our panelists recommend launching an analytics program step by step.

Before working with data, champions can start by building data glossaries. Even if it’s not at a company-wide level, you can decide within your team what each product metric (user session, repeat visit, monthly active user, etc.) means. Share this glossary with your collaborators, and you’ve already started to build alignment on your company’s approach to data and analytics.

“Any of us who are really responsible for data and who care about where we’re going . . . anyone can start.”
Carolyn Feibleman, Principal Product Manager, Amplitude

Then, start with key questions your team wants answered—two or three pieces of information that can lead to quick wins. Delivering value to your organization will make your team feel proud of their work and prove to leadership that further investment is worth it. Once you have that first win, ask what the next urgent question is and use it to guide your next project. Build momentum by offering a series of small achievements and, before you know it, you’ll have a robust analytics program.

Data governance matters as much as data access

Data wins don’t happen without careful curation of your data, which is why Carolyn advocates for a strong data governance program. Successful governance results in organized data, alignment on who is responsible for data, and strong oversight to ensure your data is well cared for.

It’s easy to make a business case for a data governance strategy, especially considering the compliance issues at stake. Ensuring you’re collecting, storing, and using data appropriately is essential to risk management.

Carolyn points out that executives often assume IT or other technical teams should own risk management because they see it as inherent to a system. But she cautions against that viewpoint. Data breaches come back to the highest levels of leadership. A smart CEO is plugged into their company’s security practices and stays personally aware of potential risks and threats.

“A lot of us don’t think about the cost of having too much or duplicative data.”
Carolyn Feibleman, Principal Product Manager, Amplitude

There’s also the question of optimizing costs and utility when collecting and storing data. For example, data stored twice—or kept for too long—will cost more as it accumulates. And data that comes without metadata will be much less useful. A strong data governance program oversees these storage and labeling practices and multiplies the return on investment (ROI) on your data collection efforts.

The three pillars of data governance

Morgan observes that a strong data governance framework has three parts: education, instrumentation, and maintenance.

  • Education refers to understanding what data to collect and store, and whether it complies with company policies and industry regulations.
  • Instrumentation refers to the consistency of data names and labels. Things like capitalization, use of underscores and dashes, and necessary metadata fall within this category. Standardizing your system at the start saves headaches in the future. Once you have a standardized taxonomy, you can start implementing it in your data tool.
  • Ongoing maintenance keeps your data useful and your company in compliance. Accountability matters, whether this effort is owned by a single team or a cross-functional data council.
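The instrumentation pillar above can be made concrete with a small lint check. The naming rule and required metadata fields below are assumptions for illustration, not a standard any particular tool enforces.

```python
import re

# Sketch of an instrumentation lint: enforce one naming convention
# (snake_case, no capitals or dashes) and required metadata before an
# event definition enters the tracking plan. Field names are assumptions.
NAME_RE = re.compile(r"^[a-z][a-z0-9_]*$")
REQUIRED_METADATA = {"owner", "description"}

def lint_event(name, metadata):
    """Return a list of taxonomy violations for one event definition."""
    issues = []
    if not NAME_RE.match(name):
        issues.append(f"'{name}' violates snake_case naming")
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        issues.append(f"missing metadata: {sorted(missing)}")
    return issues

print(lint_event("checkout_completed", {"owner": "growth", "description": "Order paid"}))
print(lint_event("Checkout-Completed", {}))  # violates both rules
```

Running a check like this at review time is one way to standardize the system at the start and save headaches later.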

“[There is] technical debt, and [there is] data debt. . . . We have to fix it. We have to clean it. We have to get it ready and structured.”—Morgan Templar, CEO, First CDO Partners

Always start by defining your goals

Every data program or initiative Amplitude VP of Value Marchelle Varamini launches starts with a plan. A clear objective gives each stakeholder a mission and prevents projects from floundering in the depths of their data collection. What should that objective be? Marchelle always looks to the KPIs that departments track.

“In any conversation and in any decision you’re making, make sure you’re anchored on the business reasons for the work you’re doing, versus it being the work you’re doing to define business reasons later.”
Marchelle Varamini, VP of Value, Amplitude

Marchelle likes to use value maps to start any discussion. She starts by asking what a business is trying to achieve—what it values. Next, she digs into any challenges that surround each value. That’s when she starts thinking about the metrics that can be used to solve them.

Effective data operations are collaborative, not isolated

Strong data operations include two types of collaboration: between employees in different roles on the same team and between different teams across the company. Combined data efforts are a sign of a strong data culture, so Marchelle likes to encourage cross-functional work.

One way to encourage effective collaboration across the company is by creating a team to coordinate collaborative efforts. Companies with mature data operations designate employees to think strategically about how the company will gather and use data, oversee governance, and bridge the gap between executives and employees when questions about data practices arise. Some have a centralized data team; others have a data council that calls in stakeholders from across the company. Either approach can work as long as the team understands the needs of every person who uses data.

For companies that don’t have teams dedicated to thinking about data, Marchelle likes to encourage employees to do this work. She asks questions to make them think about how others might use the same data they’re gathering. Every part of a company is interconnected, and most datasets have a secondary use elsewhere.

Building a data culture from the ground up

Most of us aren’t executives, so we can’t declare our companies will be data-driven and make it so. We have to lead from the bottom up.

As an employee and user of analytics tools, the best thing you can do is use data to tell the story of how you’re making a difference. Showing the impact of your work supports a larger data culture, which is why each of our panelists talked about building projects with milestones and celebrating small wins.

Of course, getting more resources for your grassroots program means winning executives over. Both Morgan and Carolyn agree the CFO can be a strong ally once you find a way to move the needle. Demonstrating a big ROI from a small initial investment will convince them these efforts are worth it.

You may face political challenges—sometimes, data will uncover failures that others won’t want to publicize. Other times, competing organizational priorities take the resources you were hoping for. These are hard to overcome, but on the flip side, data can identify weak spots and point to their solutions. Marchelle says she’s always found it easier to get help if she admits something is wrong. Additionally, data may help you see when two initiatives that seem at odds with each other are connected.

The only way to learn these truths is through experimentation, which takes us back to our second lesson: Sometimes, you have to become a champion and get things rolling.

Experimentation makes our work richer

As companies grow out of startup mode, they tend to lose the taste for curiosity. Some startups fail so fast, it becomes clear they were asking questions about a need that never existed. Data and analytics can help companies navigate both obstacles if they’re used thoughtfully.

Framing analytics as a scientific process, as Marchelle does, helps teams iterate toward growth. After all, if you don’t prove your hypothesis correct, the logical next step is to ask what other factors you might test. This mindset can also help failure-averse teams or corporations rekindle the habit of trying new things. There’s no such thing as a “failed” experiment. Even if yours didn’t turn out as expected, you’ll still learn from your data.

Over the past decade, we’ve seen the power of analytics firsthand as companies have used analytics platforms to navigate a shifting digital product landscape. If you set your company up with a strong data program now, you’ll be ready to keep up with the next decade’s changes.

Key takeaways

  • Self-service analytics are powerful for companies that want to be agile, and the best way to get started is by empowering teams to dive in with pilot projects.
  • Leading companies think about data governance early to reduce risk and ensure efficiency.
  • Questions connected to business value are the best starting point for meaningful analytics work.
  • Effective companies encourage collaboration so teams can understand each other’s data needs and work together to solve problems.
  • A strong data culture comes from both the leadership and the team level, and you can champion a data-driven approach from either.

Shall we not talk about AI for once? Your data set is likely not ready for it just yet.

Did you happen to see the Netflix documentary about Fyre, the once-in-a-lifetime luxury music festival in the Bahamas?

In case you missed it: the festival, billed as a mix between Coachella and Burning Man, ended up becoming “The Greatest Party That Never Happened”. Everyone was talking about it, it was overhyped, people bought tickets driven by “Fear Of Missing Out”, and the organisers had little to no experience with organising a festival of that size.

Why is this relevant for a blog on the website of the Digital Analytics Summit, you might wonder? Well, there are many parallels between the current state of AI and the Fyre Festival: everyone is talking about AI, it is a bit overhyped, there is a lack of experience, and marketers are suffering from “Fear Of Missing Out”.

But there is no need to be afraid you will miss out on AI. Conferences like the Digital Analytics Summit and blogs like this one give you valuable tips to help you avoid making the same mistakes others made before you.

About the authors: Suze Löbker and Harm Linssen are co-founders of Code Cube (one of the sponsors of the DDMA Digital Analytics Summit on 10 October 2024).

Recent data collection challenges

With online marketing being a significant and indispensable driver of traffic to any webshop, the importance of data is growing year after year. In the “State of Martech 2024” report by Chiefmartec, 71% of the respondents (leading martech and marketing operations professionals) reported that they’ve already integrated a data warehouse within their martech stack.

We all acknowledge the importance of data, and we all have to continuously navigate between collecting valuable data on the one hand while still respecting the customer’s privacy and being legally compliant on the other hand.

In the recent past we have seen businesses migrate en masse to server-side tracking (traditionally, behavioural data was captured in the browser of the website’s visitor). The goal was clear: to improve data quality and secure it for the long term, because server-side tracking is not affected by adblockers or the tracking prevention settings in browsers.

More recently, we have seen almost every website implement Google’s Consent Mode to ensure GDPR compliance while still being able to track valuable data for insights.

Is your data tracking ready for AI?

AI models depend entirely on the clean, complete data flowing into them. Without correct data, any model lacks the fuel to deliver an intelligent prediction.

Therefore, your data collection process (the foundation of any data-driven strategy) should be your top priority. Without validated, rich data, you will never be successful with AI.

So, to ensure the effective execution of marketing AI, never forget: garbage in results in garbage out. This applies to the simplest dashboard in which you monitor basic KPIs, and even more so to AI. Therefore, before starting with AI, you need to make sure your data tracking is in order and keeps working permanently. The dataLayer and all tags must do their work well at all times in order to capture and pass on the right data.
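As a minimal sketch of what “tags doing their work well” can mean in code, the snippet below validates pushed dataLayer events against an expected schema. The event names and required keys are hypothetical, and this is not a Google Tag Manager API.

```python
# Hedged sketch: validate collected dataLayer events against a minimal
# expected schema before trusting them as input for an AI model.
# The schema and event shapes below are illustrative assumptions.
SCHEMA = {
    "purchase": {"transaction_id", "value", "currency"},
    "add_to_cart": {"item_id", "value"},
}

def validate_data_layer(events):
    """Return a list of human-readable errors for malformed events."""
    errors = []
    for i, event in enumerate(events):
        name = event.get("event")
        if name not in SCHEMA:
            errors.append(f"event {i}: unknown event '{name}'")
            continue
        missing = SCHEMA[name] - event.keys()
        if missing:
            errors.append(f"event {i} ('{name}'): missing keys {sorted(missing)}")
    return errors

data_layer = [
    {"event": "purchase", "transaction_id": "T1001", "value": 49.95, "currency": "EUR"},
    {"event": "purchase", "value": 19.95},  # broken push: no transaction_id or currency
]
print(validate_data_layer(data_layer))
```

A check like this is the kind of validation a data scientist would otherwise have to do by hand during data preparation.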

When your tracking works perfectly, it will have a significant positive impact on the costs of your first AI project. The correct tracking setup contributes to the data quality and structure and it will eventually save your data scientists lots of valuable time. Instead of wasting time on checking, cleaning and preparing the collected data they can actually spend more time on developing and improving the AI models.

A well-working, well-documented tracking setup contributes to transparency and helps explain the AI model’s logic and its predictions to stakeholders, or even customers, when needed.

Real-time monitoring: let the AI party begin!

There are some tools available in the market that help you with checking your data collection set-up. What most of these tools don’t do however, is real-time monitoring of the most vulnerable parts in your data collection.

You don’t want your artificial intelligence capabilities to depend on human checks. To boost your AI efforts, you will be far better off with a real-time monitoring tool which alerts you instantly when something is off and states in detail what the actual problem is.

Monitoring your dataLayer, tag manager and tags in real-time ensures a constant stream of reliable data at low costs:

  • AI will be working at full speed and full power without any required human interference;
  • There is no longer any need for periodical and time consuming manual (human) checks of the tagging setup;
  • When new code is conflicting with other content or objects it will be instantly detected;
  • Valuable time for debugging is saved by automatically pinpointing the exact cause of an issue and reverse engineering the setup;
  • You will have full control over the tracking on your platform and maximise the quality of the data you collect.
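A rough sketch of the volume-based part of such monitoring, assuming hypothetical tag names and a simple drop threshold rather than any vendor’s actual product logic:

```python
# Illustrative monitoring check: compare each tag's hit count in the last
# interval with its baseline and raise an alert on a sharp drop.
# Tag names, counts, and the 50% threshold are made-up assumptions.
def check_tags(baseline, current, max_drop=0.5):
    """Return alert messages for tags firing well below their baseline."""
    alerts = []
    for tag, expected in baseline.items():
        seen = current.get(tag, 0)
        if expected and seen < expected * (1 - max_drop):
            alerts.append(f"ALERT: '{tag}' fired {seen}x, expected ~{expected}x")
    return alerts

baseline = {"ga4_pageview": 1000, "ads_conversion": 40}
current = {"ga4_pageview": 980, "ads_conversion": 3}  # conversion tag broke
print(check_tags(baseline, current))
```

Run on a short interval, a rule like this turns a silent tagging failure into an immediate, specific alert instead of a gap discovered weeks later.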

Conclusion

Despite the many challenges and potential pitfalls, the successful implementation of marketing AI is surely feasible.

Data quality is key because any AI model needs good quality data. It is the fuel needed to deliver intelligent predictions. High quality data will save time and resources for data preparation and maximise the chances of success with AI. Therefore, data quality assurance and correct tracking should be your first priority.

With real-time monitoring of your tracking set-up you have a permanent safeguard in place to protect your data quality. Real-time monitoring of your data collection process will pave the way for being successful with AI.


You don’t know what you got, till it’s gone: Why you need to monitor your tracking and tagging setup

It is probably the biggest nightmare for online professionals, decision makers and data analysts: unreliable and incomplete data.

Within any organisation, data is the foundation for strategic decisions, solving problems and the ability to run and assess marketing campaigns. We are increasingly dependent on the quality of the collected data to stay relevant and competitive.

Moreover, running online marketing campaigns relies increasingly on trustworthy data. Algorithms used by, for instance, Google, Microsoft and Meta work more effectively when you feed sufficient, correct data into their systems.

Therefore, it is no surprise that data collection has been in the spotlight for quite a while. Back in 2019, the “tracking rat race” between browsers such as Safari and Firefox on one hand and marketers on the other was already in full swing. Since then, many episodes have been added to the saga, most recently the launch of iOS 17 and the removal of tracking parameters from links. With the introduction of new technologies like server-side tagging and data warehousing, the landscape is getting more technical along the way.

Bad or missing data gives a distorted picture of your return on investment, algorithms can’t do their job well, and ultimately you waste part of your advertising budget. Despite all the attention paid to data collection, the sad reality in most companies is that stakeholders don’t even know whether tags are firing. Most websites have extensive monitoring in place for their infrastructure, but the functioning of tag containers and tracking tags, a crucial aspect of collecting consistent and accurate data, is almost always overlooked despite its significant impact.

Tracking threats

  1. Tag managers allow marketers to inject code into your website without the help of a developer. At the same time those tag managers don’t have a safeguard in place to stop new code from conflicting with other content or objects. You will not be alerted when tags are not loading, malfunctioning, causing errors and conflicts or slowing down your pages.
  2. Frequent updates on the site as well as third party technologies potentially cause errors which result in a collection gap of several days up to a few weeks. Issues don’t get noticed till it’s too late and the irreparable damage to the data and data quality has already been done.
  3. Dependencies and risks are getting bigger over time due to a growing number of technology partners and tags, each forming a potential point of failure.
  4. A lack of process, lack of knowledge and lack of communication between teams, departments and external agencies poses a risk. This risk is even bigger for companies with multiple domains and larger teams.
  5. Manual online checks of the tagging setup are time consuming and are not consistent. The results are influenced by for example the time of the day, the location, the device and browser. It’s impossible to test all edge cases.
  6. You can’t influence everything when it comes to tracking and tagging, for example the endpoint could be offline or an API could be malfunctioning without you knowing.

Everyone has the same struggle

Sometimes you find out by chance that you have been missing conversions for a specific period, and you can only estimate how many you missed based on the numbers of a previous period. During that time, you have been annoying all those buying customers with irrelevant retargeting ads. That is money wasted on annoying your customers.

You are not alone in this struggle. Most websites are not protected against the tracking threats described above. An average website regularly encounters issues, with error rates of 5% to 25%. Basically, you and most of your competitors currently have no oversight or insight into how tags are impacting data collection and the user experience of your visitors.
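The guess mentioned above can at least be made explicit. A back-of-the-envelope extrapolation from a comparable prior period might look like this, with made-up numbers:

```python
# Rough estimate of conversions lost during a tracking gap, extrapolated
# from the average daily rate of a comparable prior period.
# All figures below are invented for illustration only.
def estimate_missed(prior_total, prior_days, gap_days):
    """Extrapolate missed conversions from a prior period's daily rate."""
    daily_rate = prior_total / prior_days
    return daily_rate * gap_days

# 840 conversions in the previous 28 days, tag broken for 5 days:
print(estimate_missed(840, 28, 5))  # 30.0 per day over 5 days: 150.0
```

The point of the exercise is not precision; it is putting a number on the damage so the cost of an unmonitored tagging setup becomes tangible.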

In the land of the blind, one-eyed data collection is currently still king

You never get a second chance to capture the relevant data of your website’s visitors. It is therefore fundamental that your tagging and tracking setup guarantees a steady, uninterrupted process of data collection. You need safeguards in place that alert you in real time when things go wrong, so you can act accordingly whenever something happens to your tagging, whatever the cause may be.

Currently the one-eyed man can still be king. But what if you could be the two-eyed emperor? You can have full control over the tracking on your platform, maximise the quality of the data you collect and your marketing tagging will no longer negatively affect user experience.

Besides making sure your dataset is up to par, this will save you a lot of time, as you no longer have to pinpoint the exact cause of an issue by reverse engineering the setup.

It is possible with real-time monitoring of your tracking and tagging. Automated monitoring ensures a constant stream of reliable data at low costs. Reliable data tracking starts with real-time monitoring and it is the foundation on which any data-driven strategy should be built.


About the authors:
Harm Linssen and Suze Löbker are co-founders of Code Cube
(one of the sponsors of the DDMA Digital Analytics Summit on October 12th).

Harnessing First-Party Data: A Balance of Knowledge and Trust

Data has always been an invaluable asset, and this won’t change in the foreseeable future. On the contrary: due to the many developments in the online world, data is becoming even more indispensable. Collecting first-party data, the user and customer data that you gather yourself, is becoming an industry standard. But why is first-party data so important, and how can it be applied? Those questions are about to be answered in this article.

About the author: Steven van Eck is Web Analytics Specialist at SDIM (one of the sponsors of the DDMA Digital Analytics Summit on October 12th). On this day we’ll indulge ourselves in all things digital analytics. Will we see you there? Get your tickets at: shop.digitalanalyticssummit.nl.

Cookieless Era

The “Cookieless Era” is a topic that has held the data community in a tight grip for years. While the industry is moving towards this change, achieving a completely “cookieless” environment remains somewhat elusive. Third-party cookies, however, have already felt the impact, with numerous browsers actively blocking these types of cookies.

The disappearance of third-party cookies has had major consequences for online businesses and marketers. While collecting valuable data is becoming increasingly difficult, targeting relevant audiences is not getting any easier either. Using third-party data from platforms such as Google and Facebook for lead generation or targeting is no longer the obvious choice. You now have to come up with other creative ways to collect and apply data, especially when it comes to determining and achieving your business objectives. Collecting first-party data is now a crucial step in this process.

Ownership of your data

By collecting and storing first-party data, you truly take matters into your own hands. You become the owner of the data you collect and are no longer dependent on third parties like Google or Facebook.

  • Independence from third parties: In case you currently have data stored at third-party organisations, you will always be dependent on those parties for processing and accessibility of that data.
  • No more splintered data: Another great benefit of ‘owning’ your data is that you can store it in one location that is accessible to you at all times. Moreover, this can save you quite some time by eliminating the need to transfer and translate the data from other platforms into one report. One of the best solutions for this is data warehousing.
  • Retaining raw data: By hosting your own data storage solution you can determine how you save and process this data. You can keep it in its rawest form without the need for sampling to lighten the load. Additionally, miscalculations or miscommunications about the results can be prevented, as the source material is always within arm’s reach.
  • Determine the expiration date: Data longevity often varies across platforms and tools. Through data warehousing, you can determine the lifespan of your data – whether it’s two months or two years, it is all up to you.
  • Ensuring Privacy and Security: By setting your own parameters for data processing and storage, you assume the responsibility of respecting user privacy, complying with the GDPR (and the Dutch AVG), and the prevention of data breaches.
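Determining the expiration date yourself can be as simple as a scheduled purge. The sketch below assumes a hypothetical record layout and illustrates the idea only:

```python
from datetime import date, timedelta

# Sketch of a self-managed retention policy in your own warehouse:
# you decide the lifespan and purge anything older. The record layout
# (user_id, collected_on) is a made-up assumption for this example.
def apply_retention(rows, retention_days, today):
    """Keep only rows collected within the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [r for r in rows if r["collected_on"] >= cutoff]

rows = [
    {"user_id": "u1", "collected_on": date(2023, 1, 10)},
    {"user_id": "u2", "collected_on": date(2024, 9, 1)},
]
kept = apply_retention(rows, retention_days=365, today=date(2024, 10, 10))
print([r["user_id"] for r in kept])  # only the recent row survives
```

Whether the window is two months or two years, the policy lives in one place under your control, which also supports the GDPR’s storage limitation principle.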

The importance of strategy

Collecting first-party data is not a simple undertaking in itself. You have to prepare beforehand and take a number of things into account. To help you prepare, ask yourself the following questions and see whether you can provide a sufficient answer:

  • What first-party data do I want to collect?
  • How will I collect this data?
  • Where will I store it?
  • How will I ensure GDPR-/AVG-compliance?
  • What is the purpose of this data?

In other words, don’t just start collecting first-party data; create an initial ‘data strategy’ that aligns not only with the values of your organisation but also with its objectives. You want to avoid missing out on important data or creating security risks due to the lack of a proper plan: handle with care! Creating a first-party strategy as a foundation for future online endeavours will certainly help you in the long run, even if it’s just to keep things organised and prevent mishaps.

The meteoric rise of Artificial Intelligence

Another topic that cannot be excluded from today’s conversation is Artificial Intelligence (AI). In the past year, the developments around AI have skyrocketed, to such an extent that it is virtually impossible to stay up-to-date with all current and future innovation.

Without data, AI wouldn’t be where it is today, let alone even exist. Behind every AI solution is an algorithm that has been trained by processing often massive amounts of data. One of the most well-known examples of this is of course ChatGPT, which uses data from the internet to train and develop its model. An AI solution doesn’t always need public datasets in order to learn, though. If you’re trying to create an AI that needs to provide you with relevant data, the first-party data you have collected will be its most important ingredient, as this data is unique to your situation.

There are a plethora of conceivable applications when it comes to Artificial Intelligence and using your first-party data sets as the primary source – for example:

  • Analysing and segmenting your customer base.
  • Providing relevant up- and cross-sell recommendations.
  • Predicting customer lifetime value and churn.
  • Creating personalised experiences for previous customers/visitors.
  • Identifying fraudulent behaviour by tracking suspicious activity in your own data.

These are a few examples of the many possibilities in which AI and first-party data can be leveraged as quite the dynamic duo. The coming years will definitely provide us with even more exciting new and unique applications of AI.
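As a toy illustration of the churn-prediction item above, a simple recency rule over your own purchase history might look like this. A production model would be trained on far richer first-party data, and the 90-day threshold here is an arbitrary assumption:

```python
from datetime import date

# Toy churn flag from first-party purchase history: a customer who has
# been inactive longer than a threshold is marked high-risk. A real model
# would be trained, but the key ingredient is the same: your own data.
def churn_risk(last_purchase, today, threshold_days=90):
    """Classify churn risk from days since the last purchase."""
    days_inactive = (today - last_purchase).days
    return "high" if days_inactive > threshold_days else "low"

today = date(2024, 10, 10)
print(churn_risk(date(2024, 9, 20), today))  # recent buyer: low
print(churn_risk(date(2024, 4, 1), today))   # inactive for months: high
```

The same recency signal also feeds the other items on the list, such as segmentation and personalised win-back campaigns.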

From raw data to marketing

Through first-party data, reaching your target audience has become even more interesting, as the type of data you have collected can grant you various outcomes. On your own website, first-party data can be leveraged to create personalised experiences, as mentioned before. Users who land on pages that are adapted to their personal interests are more likely to trigger a conversion action.

Another great way to leverage your first-party data is by using marketing automation. Through marketing automation you can target specific audiences with relevant messages. For example, an automated follow-up email that is sent three days after a visitor has downloaded a whitepaper. While email marketing remains a popular choice for marketing automation, there are other alternatives. Another quite underrated method is text messaging — not the WhatsApp variety, but traditional SMS. Others include (native) site pop-ups and sending notifications through apps, social media or other channels.

First-party data also enables targeting audiences through advertising platforms such as Google and Facebook. These channels allow you to upload the collected data, subsequently granting you immediate access to your desired target audiences within that specific platform. Moreover, you will also be able to reach audiences that share traits of your primary one. It allows for a much more creative yet relevant approach to your marketing activities, which could (very likely!) increase your chances for a higher conversion rate as a result.
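When uploading contact lists to such platforms, emails are generally normalised and hashed first so that raw addresses never leave your systems. The exact requirements differ per platform and should be checked in its own documentation, so treat this as the general shape only:

```python
import hashlib

# General shape of preparing a first-party email list for an ad platform
# upload: trim, lowercase, then hash (SHA-256 is commonly required).
# Always verify the specific normalisation rules of the platform you use.
def hash_email(email):
    """Normalise an email address and return its SHA-256 hex digest."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
print(hash_email("jane.doe@example.com"))  # same digest: normalisation worked
```

Normalising before hashing matters: without it, the same customer entered with different capitalisation would never match on the platform side.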

Independence, innovation and trust

In the digital world, data plays an increasingly prominent role and the value of first-party data will continue to grow. Collecting and governing data about your own visitors and customers not only offers more independence and control, but also leads the way to personalised experiences and innovative applications such as Artificial Intelligence. Proper data collection and effectively leveraging first-party data will be important cornerstones for the future of data.

In conclusion, the power of data is undeniable. However, let’s not forget that without consumers, we would not have any data to collect. For that reason, be mindful of the consumer and be transparent about how their data is processed and applied. If you manage to earn the trust of your customers, they are more likely to be willing to share their data; of course, this relationship will always involve a balance of give and take.

The importance of data quality for a data-centric way of working

Based on a recent 2023 Statista study among loyalty customers worldwide, the top purchase drivers are: a plentiful range of products (81%), product availability (80%), data privacy policy (79%) and good customer service (77%). Evidently, having a precise understanding of the customer experience holds great significance. To effectively monitor your customers’ experiences and to make well-founded decisions, you need reliable data for information and insights. Adopting a data-centric way of working is the means to accomplish this.

Ultimately, data provides you and your teams with information and customer insights to improve customer experience. Gaining the ability to perceive your actual customers’ behaviours through data, and leveraging this knowledge to make well-informed business decisions that refine their customer journey, requires placing 100% reliance on data. Adopting a data-centric approach where everyone places their trust in customer data is essential. This article by Jesse Terstappen (OrangeValley, sponsor of this year’s Summit) delves into several prevalent risks associated with upholding data quality, the necessary prerequisites for maintaining it, such as effectively managing data privacy and compliance, and the criteria for establishing a data-centric organization.

The most common data quality risks

  1. Accidentally including personally identifiable information (PII), such as your paying customers’ email addresses, in URL parameters. The result is a GDPR violation, with fines and reputational damage. Not very common, but extremely high-risk. We spotted this once last year.
  2. Important decisions cannot be fully supported with actionable information due to a loss of analytics measurements after a development update. A gap in the data (a week or even longer) is a common occurrence at many organizations. As you can imagine, this makes it harder to rely on long-term analyses, since you will not be able to draw conclusions from year-on-year comparisons of the same period. We see these challenges appear on a quarterly basis.
  3. Overload of Data and Tracking Methods: With the abundance of data available today, it’s easy to become overwhelmed by the sheer volume of information. It’s important to prioritize collecting data that aligns with your organisation’s goals and KPIs. Additionally, having a streamlined approach to tracking methods can help ensure consistency and accuracy.
  4. Attribution Models and Discrepancies: Different advertising and social media platforms often have varying attribution models, leading to discrepancies in reported numbers. Establishing a standardized attribution model or understanding the differences among platforms can help mitigate confusion. Regularly auditing and cross-referencing data from different sources can also help identify inconsistencies.
  5. Lack of Consistency in Reporting & Defining of KPIs: Consistency in reporting is crucial for building trust in data-driven decisions. Having standardised reporting templates, guidelines, and protocols can help ensure that data is interpreted consistently across departments. Ideally, key performance indicators (KPIs) and tracking methods should be defined consistently across the organisation. This promotes a unified understanding of goals and ensures that everyone is working toward the same objectives. However, it’s important to allow some flexibility to adapt to specific departmental needs while maintaining alignment with the overall organizational strategy.
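Risk 1 above, PII leaking into URL parameters, can be caught with a simple scan. The sketch below uses a deliberately simple email pattern and hypothetical URLs:

```python
import re
from urllib.parse import urlparse, parse_qs

# Sketch of a PII scan over collected page URLs: flag any query
# parameter whose value looks like an email address. The regex is
# deliberately simple; a real scanner would cover more PII patterns.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}")

def find_pii_params(url):
    """Return the names of query parameters carrying email-like values."""
    params = parse_qs(urlparse(url).query)
    return [k for k, vals in params.items() if any(EMAIL_RE.fullmatch(v) for v in vals)]

print(find_pii_params("https://shop.example/thanks?order=123&email=jane@example.com"))
print(find_pii_params("https://shop.example/thanks?order=123"))
```

Run over your analytics page-URL export, a scan like this surfaces leaking parameters before a regulator or a customer does.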

Mastering data privacy and compliance

Proper configuration is the basis of privacy and compliance. Incorrect privacy settings can cause the unintended collection and storage of sensitive data. Mastering data privacy and compliance means ensuring that privacy settings are configured and maintained correctly to comply with applicable legislation, such as the GDPR; you of course want to avoid fines and reputational damage. A GDPR monitor, including a dashboard with real-time alerts, can assist with this by automatically checking for the presence of personally identifiable information (PII) in the data.

Who has access to what personally identifiable information (PII)?

A crucial aspect of compliance involves setting user permissions accurately and managing access to GA4 data appropriately. Mishandled user permissions and access controls may lead to unauthorised access to GA4 data. It is imperative to confine access strictly to authorised users, so that sensitive data does not fall into the wrong hands. This helps to prevent data breaches and avoid potential legal consequences.

The importance of continuously collecting, tracking and storing valuable data

A proper analytics implementation is of course required to collect and store data that can be turned into information and, finally, insights. On the one hand, we see that a lot of data is rendered worthless because it was gathered incorrectly; since you only get one chance to collect data, it has to be collected correctly from the start. On the other hand, maintenance is a necessity: we often see that historical information is no longer available because the retention period was set incorrectly. Trends can then no longer be spotted, and valuable analysis is no longer possible.

A 6-stage data model to handle your data

You can use the 6-stage data model below to ensure proper data handling, and thereby acquire valuable insights that serve as the foundation for your actions. The initial phases of this model cover establishing and maintaining data collection, as well as managing data storage. These first steps, data gathering and storage, are of the utmost importance. The subsequent stages ('Transformation', 'Visualisation', 'Analysis' and, ultimately, 'Activation' of data) rely heavily on these foundations: the proper setup and maintenance of data collection, and accurate data storage with appropriate retention periods.
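
To make the ordering of the model concrete, the six stages can be sketched as a simple pipeline (the stage names come from the model above; the runner function and the example handler are schematic assumptions):

```python
# The six stages of the data model, in order. The first two are the
# foundation on which all later stages depend.
STAGES = ["Collection", "Storage", "Transformation",
          "Visualisation", "Analysis", "Activation"]

def run_pipeline(data, stage_handlers):
    """Apply each stage's handler in the model's fixed order;
    stages without a handler simply pass the data through."""
    for stage in STAGES:
        handler = stage_handlers.get(stage)
        if handler:
            data = handler(data)
    return data

# Example: a trivial 'Transformation' handler that drops empty events.
result = run_pipeline(
    [{"event": "page_view"}, {}],
    {"Transformation": lambda rows: [r for r in rows if r]},
)
print(result)  # [{'event': 'page_view'}]
```

The point of the fixed ordering is exactly the dependency described above: a 'Visualisation' or 'Analysis' handler can only be as good as what 'Collection' and 'Storage' hand it.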

How to maintain data quality?

Since analytics platform vendors constantly change existing features or launch new ones, organisations often struggle with an outdated analytics set-up. This renders valuable data useless and squanders the investment made in it. To counteract this, an automated data quality monitor keeps watch over your analytics configuration and notifies the team in real time when adjustments are necessary. This ensures that stringent data quality standards are upheld at minimal expense.
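
A simple version of such a monitor might diff the live configuration against an expected baseline and raise alerts on any drift (the setting names and structure here are illustrative assumptions, not an actual product's schema):

```python
def config_drift(expected, actual):
    """Return human-readable alerts for settings that are missing or changed."""
    alerts = []
    for key, value in expected.items():
        if key not in actual:
            alerts.append(f"missing setting: {key}")
        elif actual[key] != value:
            alerts.append(f"changed setting: {key} ({value!r} -> {actual[key]!r})")
    return alerts

# Example: the retention period was silently reset to a short default.
expected = {"retention_months": 14, "enhanced_measurement": True}
actual = {"retention_months": 2, "enhanced_measurement": True}
print(config_drift(expected, actual))
# ['changed setting: retention_months (14 -> 2)']
```

Scheduling a check like this after every vendor release cycle is what turns a one-off audit into continuous monitoring.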

How can you minimise data loss?

With the help of a data quality monitor, you can automatically compare trends in today’s data with those from the previous day. This day-to-day comparison gives you critical alerts, enabling you to identify instances where a conversion (formerly a goal completion) has been altered by changes on your website. Comparing day-to-day traffic data can, for example, flag tagging issues, which can then be fixed directly. It also safeguards the seamless flow of high-quality data into your data storage location. From there, the process of transforming, visualising and analysing data can begin.
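
The day-to-day comparison described above can be sketched as a threshold check (the 50% threshold and the metric names are assumed choices for illustration, not fixed values of any particular monitor):

```python
def day_over_day_alerts(yesterday, today, threshold=0.5):
    """Flag metrics that moved by more than `threshold` (as a fraction)
    compared with the previous day."""
    alerts = []
    for metric, prev in yesterday.items():
        curr = today.get(metric, 0)
        if prev > 0 and abs(curr - prev) / prev > threshold:
            alerts.append(f"{metric}: {prev} -> {curr}")
    return alerts

# Example: traffic is stable, but conversions collapsed overnight,
# e.g. because a website change broke the conversion tag.
yesterday = {"sessions": 1000, "conversions": 50}
today = {"sessions": 980, "conversions": 5}
print(day_over_day_alerts(yesterday, today))  # ['conversions: 50 -> 5']
```

An alert like this arriving the morning after a release is the difference between one day of lost conversion data and a month of it.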

How to do a reliable data analysis: the famous ‘360 customer view’

In order to do a reliable data analysis, you first need to make sure events are set up correctly and filters are configured accurately, so that reports can be relied upon. Misconfiguration can result in inaccurate data and analysis when, for example, certain events or traffic are excluded. This can lead to wrong conclusions, poor decision-making and missed opportunities for improvement.

Beyond this, other important areas include understanding the differences between Universal Analytics and GA4, navigating the intricacies of varying reported conversion figures, and building attribution models.

The difference between Universal Analytics and GA4 output

We have all seen the differences between the output of Universal Analytics and GA4. These differences erode trust in the data among our colleagues, the very people we want to convince with our analytics insights. There is also a difference between the data shown in the GA4 interface and the raw data. Although Google might say that it is showing you all the data, GA4 is not showing you 100%. The reason is a focus on speed: Google wants to compete with other analytics platforms on loading time in the interface. One technique it employs to achieve this is session estimation, which is based on a smaller subset of the data. This also explains part of the difference between UA and GA4 output.

Why do reported conversion figures on social platforms differ from those in your analytics platform?

You might have noticed differences in how conversions are attributed to paid advertising or social channels. For instance, why does TikTok report a higher count of conversions than your analytics platform? Meta is also a name we often hear within our agency when differences in conversion reports are discussed. These discrepancies stem from the underlying business model of the advertising and social platforms: they benefit from reporting a higher number of conversions. How conversions are assigned to a platform, and why the numbers differ, comes down to attribution: different methods assign credit to the various marketing channels or touchpoints along the conversion path. GA4 now uses three different attribution models:

  1. Standard channel group for new users: First Click
  2. Standard channel group for sessions: Last Click
  3. Standard channel group for conversions: Data Driven
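
To illustrate how differently these models assign credit, here is a minimal sketch over a single conversion path (the path and channel names are made up for the example):

```python
def first_click(path):
    """All credit to the channel that acquired the user."""
    return path[0]

def last_click(path):
    """All credit to the channel that closed the conversion."""
    return path[-1]

# One user's touchpoints on the way to a conversion, in chronological order.
path = ["social", "email", "paid_search"]

print(first_click(path))  # social
print(last_click(path))   # paid_search
# Data-driven attribution, by contrast, distributes credit across the whole
# path based on a modelled contribution per touchpoint, which cannot be
# reproduced in a toy example like this.
```

The same conversion is thus credited to different channels depending on which model a report uses, which is exactly why numbers diverge between platforms.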

Build your own attribution model

You can put your organisation in control of the challenges posed by differences between attribution models by creating and managing your own attribution model. If you want to make full use of all available GA4 data, BigQuery can be a viable option. Through the BigQuery integration, the raw data can be used. With the help of SQL, your team can reproduce the reporting options available in GA4 and even customise them. This makes it possible to define and use rule-based marketing attribution models, using logic that you own and can change. This puts you in the driver’s seat of attribution!
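
As a sketch of what such a rule-based model you own could look like, here is a position-based rule in Python rather than SQL (the 40/40/20 split is an assumed design choice for illustration, not a GA4 default; the same logic translates directly to a SQL query over raw BigQuery export data):

```python
def position_based(path, first=0.4, last=0.4):
    """Assign `first` of the credit to the first touch, `last` to the last,
    and spread the remainder evenly over the middle touchpoints."""
    if len(path) == 1:
        return {path[0]: 1.0}
    rest = 1.0 - first - last
    middle = path[1:-1]
    if not middle:  # two-touch path: fold the remainder into first and last
        first, last = first + rest / 2, last + rest / 2
    credit = {}
    credit[path[0]] = credit.get(path[0], 0.0) + first
    credit[path[-1]] = credit.get(path[-1], 0.0) + last
    for channel in middle:
        credit[channel] = credit.get(channel, 0.0) + rest / len(middle)
    return credit

print(position_based(["social", "email", "paid_search"]))
# social and paid_search each get ~0.4, email gets the remaining ~0.2
```

Because the rule lives in code you control, changing the split or adding channel-specific exceptions is a one-line edit rather than a platform limitation.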

What are the conditions required to transform into a data-centric organisation?

While data-driven digital marketing focuses on using data as a tool, data-centric digital marketing goes a step further by viewing data itself as a valuable asset. It means treating data as an essential business asset that is central to making decisions and developing marketing strategies. Collecting, storing and managing data is key to gaining valuable insights into customer behaviour and trends. A data-centric approach is essential for organisations that want to grow and compete in a digital environment. By treating data as a valuable asset, companies can differentiate themselves from their competitors and gain insights that lead to effective marketing strategies and a better customer experience.

The four fundamental aspects of a data-centric organisation

  1. Maintenance: Data quality is set up properly and maintained constantly;
  2. Knowledge: A uniform understanding among all company stakeholders of what is tracked and what the various metrics mean;
  3. Application: Every employee knows how to use data whenever relevant and possible;
  4. Trust: A sense of confidence in, and reliance on, data throughout the organisation.

One example of how a client of ours helps all stakeholders across the organisation understand both the tracked elements and the various metrics is a ‘KPI catalog’. This catalog contains all triggers and definitions of measures, presented in language that is comprehensible to every stakeholder in the organisation.

Conclusion

Research underscores the paramount importance of data quality in a data-centric approach to business. Understanding customer preferences, driving decisions and enhancing customer experiences all depend on accurate and reliable data. Aspects such as data privacy, consistency and proper setup play vital roles in maintaining data quality. Organisations must establish strong data fundamentals, automated quality monitoring and a reliable analytics implementation to navigate the challenges and unlock the benefits of a data-centric approach. Trust, knowledge, application and maintenance are the cornerstones of such a transformation, enabling effective decision-making and superior customer engagement. Reliable data, and employees’ trust in that data, are fundamental for building thriving data-centric organisations in the future.

Author: Jesse Terstappen, Data Analyst, OrangeValley