You don’t know what you got, till it’s gone: Why you need to monitor your tracking and tagging setup

It is probably the biggest nightmare for online professionals, decision makers and data analysts: unreliable and incomplete data.

Within any organisation, data is the foundation for strategic decisions, solving problems and the ability to run and assess marketing campaigns. We are increasingly dependent on the quality of the collected data to stay relevant and competitive.

Moreover, running online marketing campaigns relies increasingly on trustworthy data. The algorithms used by platforms such as Google, Microsoft and Meta work more effectively when you feed sufficient and correct data into their systems.

It is therefore no surprise that data collection has been in the spotlight for quite a while. Already in 2019, the “tracking rat race” between Safari and Firefox on one hand and marketers on the other was in full swing. Since then, many episodes have been added to the saga, most recently the launch of iOS 17 and its stripping of tracking parameters from URLs. With the arrival of new technologies like server-side tagging and data warehousing, the landscape is becoming more technical along the way.

Bad or missing data paints a distorted picture of your return on investment, algorithms can’t do their job well, and ultimately you waste part of your advertising budget. Despite all the attention for data collection, the sad reality in most companies is that stakeholders don’t even know if and when tags are firing. Most websites have extensive monitoring in place for their infrastructure, but the functioning of tag containers and tracking tags, crucial for collecting consistent and accurate data, is almost always overlooked despite its significant impact.

Tracking threats

  1. Tag managers allow marketers to inject code into your website without the help of a developer. At the same time those tag managers don’t have a safeguard in place to stop new code from conflicting with other content or objects. You will not be alerted when tags are not loading, malfunctioning, causing errors and conflicts or slowing down your pages.
  2. Frequent updates on the site as well as third party technologies potentially cause errors which result in a collection gap of several days up to a few weeks. Issues don’t get noticed till it’s too late and the irreparable damage to the data and data quality has already been done.
  3. Dependencies and risks are getting bigger over time due to a growing number of technology partners and tags, each forming a potential point of failure.
  4. A lack of process, lack of knowledge and lack of communication between teams, departments and external agencies poses a risk. This risk is even bigger for companies with multiple domains and larger teams.
  5. Manual online checks of the tagging setup are time consuming and are not consistent. The results are influenced by for example the time of the day, the location, the device and browser. It’s impossible to test all edge cases.
  6. You can’t influence everything when it comes to tracking and tagging, for example the endpoint could be offline or an API could be malfunctioning without you knowing.
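
Threat 5, the inconsistency of manual checks, is the easiest to automate away. As a minimal sketch (the container ID and URL below are hypothetical, and the check only verifies that the snippet is present, not that tags actually fire), a script can fetch a page and confirm that the tag manager container is at least being loaded:

```python
import re
import urllib.request

def gtm_snippet_present(html: str, container_id: str) -> bool:
    """Check whether the standard GTM loader snippet for a container is in the HTML."""
    # The standard Google Tag Manager snippet references gtm.js with the container ID.
    pattern = rf"googletagmanager\.com/gtm\.js\?id={re.escape(container_id)}"
    return re.search(pattern, html) is not None

def check_page(url: str, container_id: str) -> bool:
    """Fetch a live page and verify the tag container is actually being loaded."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return gtm_snippet_present(html, container_id)
```

A scheduled job running checks like this across key pages catches a missing container within minutes instead of weeks, regardless of who last touched the site.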

Everyone has the same struggle

Sometimes you find out by coincidence that you have been missing conversions for a specific period, and you can only guess how many you missed based on the numbers of the previous period. During that time, you have been annoying all those buying customers with irrelevant retargeting ads. That is money wasted on annoying your customers.

You are not alone in this struggle. Most websites are not protected against the tracking threats described above. An average website regularly encounters issues, with error rates of 5% to 25%. Basically, you and most of your competitors currently have no oversight of or insight into how tags impact data collection and the user experience of your visitors.

In the land of the blind, one-eyed data collection currently still is king

You never get a second chance to capture the relevant data of your website’s visitors. It is therefore fundamental that your tagging and tracking setup guarantees a steady and uninterrupted process of data collection. You need safeguards that alert you in real time when things break, so you can act accordingly when something happens to your tagging, whatever the cause may be.

Currently the one-eyed man can still be king. But what if you could be the two-eyed emperor? You can have full control over the tracking on your platform, maximise the quality of the data you collect and your marketing tagging will no longer negatively affect user experience.

Besides making sure your dataset is up to par, this saves you a lot of time: no more pinpointing the exact cause of an issue or reverse engineering the setup.

This is possible with real-time monitoring of your tracking and tagging. Automated monitoring ensures a constant stream of reliable data at low cost. Reliable data tracking starts with real-time monitoring, and it is the foundation on which any data-driven strategy should be built.


About the authors:
Harm Linssen and Suze Löbker are co-founders of Code Cube
(one of the sponsors of the DDMA Digital Analytics Summit on October 12th).

Harnessing First-Party Data: A Balance of Knowledge and Trust

Data has always been an invaluable asset, and this won’t change in the foreseeable future. On the contrary, due to the many developments in the online world, data is becoming even more indispensable. Collecting first-party data, meaning the user and customer data that you collect yourself, is becoming an industry standard. But why is first-party data so important, and how can it be applied? Those questions are answered in this article.

About the author: Steven van Eck is Web Analytics Specialist at SDIM (one of the sponsors of the DDMA Digital Analytics Summit on October 12th). On this day we’ll indulge ourselves in all things digital analytics. Will we see you there?

Cookieless Era

The “Cookieless Era”, a topic that has held the data community in its tight grip for the past years. While the industry is moving towards this change, achieving a completely “cookieless” environment remains somewhat elusive. Third-party cookies, however, have felt the impact, with numerous browsers actively shielding against these types of cookies.

The disappearance of third-party cookies has had big consequences for online businesses and marketers. While collecting valuable data is becoming increasingly difficult, targeting relevant audiences is not getting any easier either. Using third-party data from platforms such as Google and Facebook for lead generation or targeting is no longer the obvious choice. You now have to come up with other, more creative ways to collect and apply data, especially when it comes to determining and achieving your business objectives. Collecting first-party data is now a crucial step in this process.

Ownership of your data

By collecting and storing first-party data, you truly take matters into your own hands. You become the owner of the data you collect and are no longer dependent on third parties like Google or Facebook.

  • Independence from third parties: In case you currently have data stored at third-party organisations, you will always be dependent on those parties for processing and accessibility of that data.
  • No more splintered data: Another great benefit of ‘owning’ your data is that you can store it in one location that is accessible to you at all times. Moreover, this can save you quite some time by eliminating the need to transfer and translate data from other platforms into one report. One of the best-suited solutions for this is data warehousing.
  • Retaining raw data: By hosting your own data storage solution, you determine how you save and process this data. You can keep it in its rawest form, without the need for sampling to lighten the load. Additionally, miscalculations or miscommunications about the results can be prevented, as the source material is always within arm’s reach.
  • Determine the expiration date: Data longevity often varies across platforms and tools. Through data warehousing, you can determine the lifespan of your data – whether it’s two months or two years, it is all up to you.
  • Ensuring Privacy and Security: By setting your own parameters for data processing and storage, you assume the responsibility of respecting user privacy, complying with the GDPR (and the Dutch AVG), and the prevention of data breaches.

The importance of strategy

Collecting first-party data is not a goal in itself. You have to prepare beforehand and take a number of things into account. To help you prepare, ask yourself the following questions and see if you can provide a sufficient answer:

  • What first-party data do I want to collect?
  • How will I collect this data?
  • Where will I store it?
  • How will I ensure GDPR-/AVG-compliance?
  • What is the purpose of this data?

In other words, don’t just start collecting first-party data, but create an initial ‘data strategy’ that falls in line with not only the values of your organisation, but also its objectives. You want to prevent missing out on important data or creating security risks due to the lack of a proper plan – handle with care! Creating a first-party strategy as a foundation for future online endeavours will most certainly help you in the long run, even if it’s just to keep things organised and prevent any mishaps from happening.

The meteoric rise of Artificial Intelligence

Another topic that cannot be excluded from today’s conversation is Artificial Intelligence (AI). In the past year, the developments around AI have skyrocketed, to such an extent that it is virtually impossible to stay up-to-date with all current and future innovation.

Without data, AI wouldn’t be where it is today, let alone exist. Behind every AI solution is an algorithm that has been trained by processing often massive amounts of data. One of the most well-known examples is of course ChatGPT, which uses data from the internet to train and develop its model. An AI solution doesn’t always need public datasets in order to learn, however. If you’re building an AI that needs to provide you with relevant output, the first-party data you have collected will be its most important ingredient, as this data is unique to your situation.

There are a plethora of conceivable applications when it comes to Artificial Intelligence and using your first-party data sets as the primary source – for example:

  • Analysing and segmenting your customer base.
  • Providing relevant up- and cross-sell recommendations.
  • Predicting customer lifetime value and churn.
  • Creating personalised experiences for previous customers/visitors.
  • Identifying fraudulent behaviour by tracking suspicious activity in your own data.
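
As an illustration of the first item, here is a deliberately simple, pure-Python sketch of RFM-style customer segmentation. The segment labels and thresholds are invented for the example; in practice they would be derived from your own first-party data (for instance, quantiles of your customer base):

```python
from datetime import date

def rfm_segment(last_purchase: date, orders: int, today: date) -> str:
    """Assign a coarse RFM-style segment to one customer.

    Labels and thresholds here are illustrative only, not a standard.
    """
    recency_days = (today - last_purchase).days
    if recency_days <= 30 and orders >= 5:
        return "loyal"      # bought recently and often
    if recency_days <= 30:
        return "recent"     # bought recently, low frequency so far
    if recency_days > 180:
        return "at-risk"    # long silence: candidate for win-back
    return "regular"
```

Segments like these can then feed personalisation, win-back campaigns, or a more elaborate machine-learning model.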

These are a few examples of the many possibilities in which AI and first-party data can be leveraged as quite the dynamic duo. The coming years will definitely provide us with even more exciting new and unique applications of AI.

From raw data to marketing

Through first-party data, reaching your target audience has become even more interesting, as the type of data you have collected determines what you can do with it. On your own website, first-party data can be leveraged to create personalised experiences, as mentioned before. Users who land on pages adapted to their personal interests are more likely to trigger a conversion action.

Another great way to leverage your first-party data is by using marketing automation. Through marketing automation you can target specific audiences with relevant messages. For example, an automated follow-up email that is sent three days after a visitor has downloaded a whitepaper. While email marketing remains a popular choice for marketing automation, there are other alternatives. Another quite underrated method is text messaging — not the WhatsApp variety, but traditional SMS. Others include (native) site pop-ups and sending notifications through apps, social media or other channels.
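
The whitepaper follow-up described above boils down to a trigger plus a delay. A minimal sketch of that decision logic (the three-day delay comes from the example; the function name and everything else are illustrative):

```python
from datetime import datetime, timedelta

def followup_due(downloaded_at: datetime, now: datetime,
                 delay_days: int = 3) -> bool:
    """Decide whether the automated follow-up message is due,
    using a configurable delay after the triggering event."""
    return now >= downloaded_at + timedelta(days=delay_days)
```

A real marketing-automation platform wraps logic like this in journeys and channel connectors, but the core of every automation is exactly such a trigger-and-delay rule.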

First-party data also enables targeting audiences through advertising platforms such as Google and Facebook. These channels allow you to upload the collected data, granting you immediate access to your desired target audiences within that specific platform. Moreover, you will also be able to reach audiences that share traits with your primary one. It allows for a much more creative yet relevant approach to your marketing activities, which can very likely increase your conversion rate as a result.

Independence, innovation and trust

In the digital world, data plays an increasingly prominent role, and the value of first-party data will continue to grow. Collecting and governing data about your own visitors and customers not only offers more independence and control, but also leads the way to personalised experiences and innovative applications such as Artificial Intelligence. Proper data collection and effectively leveraging first-party data will be important cornerstones for the future of data.

In conclusion, the power of data is undeniable, that much has been proven. However, let’s not forget that without consumers, we would not have any data to collect. For that specific reason, please be mindful of the consumer and be transparent with any questions regarding the processing and application of their collected data. If you manage to earn the trust of your customers, the chances are more likely that they would be willing to share their data, and of course, this relationship will always involve a balance of give and take.

The importance of data quality for a data-centric way of working

Based on a recent 2023 Statista study among loyalty customers worldwide, the top purchase drivers are: a plentiful range of products (81%), product availability (80%), data privacy policy (79%) and good customer service (77%). Evidently, having a precise understanding of the customer experience holds great significance. To effectively monitor your customers’ experiences and to make well-founded decisions, you need reliable data for information and insights. Adopting a data-centric way of working is the means to accomplish this.

Ultimately, data provides you and your teams with information and customer insights to improve the customer experience. Perceiving your actual customers’ behaviour through data, and leveraging this knowledge to make well-informed business decisions that refine their customer journey, requires being able to rely on that data completely. Adopting a data-centric approach in which everyone places their trust in customer data is essential. This article by Jesse Terstappen (OrangeValley, sponsor of this year’s Summit) delves into several prevalent risks associated with upholding data quality, the prerequisites for maintaining it, such as effectively managing data privacy and compliance, and the criteria for establishing a data-centric organisation.

The most common data quality risks

  1. Accidentally including personally identifiable information (PII), such as email addresses of your paying customers, in URL parameters. The result is a GDPR violation, with fines and reputational damage as possible consequences. Not very common, but it carries an extremely high risk. We spotted this once last year.
  2. Important decisions cannot be fully supported with actionable information due to a loss of analytics measurements after a development update. A gap in the data (a week or even longer) is a common occurrence at many organisations. As you can imagine, this makes it harder to rely on long-term analyses, since you will not be able to draw conclusions from year-over-year comparisons of the same period. We see these challenges appear on a quarterly basis.
  3. Overload of Data and Tracking Methods: With the abundance of data available today, it’s easy to become overwhelmed by the sheer volume of information. It’s important to prioritize collecting data that aligns with your organisation’s goals and KPIs. Additionally, having a streamlined approach to tracking methods can help ensure consistency and accuracy.
  4. Attribution Models and Discrepancies: Different advertising and social media platforms often have varying attribution models, leading to discrepancies in reported numbers. Establishing a standardized attribution model or understanding the differences among platforms can help mitigate confusion. Regularly auditing and cross-referencing data from different sources can also help identify inconsistencies.
  5. Lack of Consistency in Reporting & Defining of KPIs: Consistency in reporting is crucial for building trust in data-driven decisions. Having standardised reporting templates, guidelines, and protocols can help ensure that data is interpreted consistently across departments. Ideally, key performance indicators (KPIs) and tracking methods should be defined consistently across the organisation. This promotes a unified understanding of goals and ensures that everyone is working toward the same objectives. However, it’s important to allow some flexibility to adapt to specific departmental needs while maintaining alignment with the overall organizational strategy.

Mastering data privacy and compliance

Proper configuration leads to privacy and compliance. Incorrect privacy settings can cause the unintended collection and storage of sensitive data. Mastering data privacy and compliance means ensuring that privacy settings are configured and maintained correctly to comply with applicable legislation, such as the GDPR. You of course want to avoid fines and reputational damage. A GDPR monitor, including a dashboard with real-time alerts, can assist by automatically checking for the presence of personally identifiable information (PII) in the data.
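
The core of such a PII check can be surprisingly small. As a sketch, assuming page URLs are available from your analytics data, and using a pragmatic rather than RFC-complete email pattern:

```python
import re
from urllib.parse import urlsplit, parse_qsl

# A pragmatic (not exhaustive) email pattern for PII scanning.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_pii_in_url(url: str) -> list[str]:
    """Return the names of query parameters whose values look like email addresses."""
    query = urlsplit(url).query
    return [name for name, value in parse_qsl(query) if EMAIL_RE.fullmatch(value)]
```

Running a scan like this over collected page URLs, with an alert on any hit, turns the “we spotted this once last year” scenario into something caught the same day.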

Who has access to what personally identifiable information (PII)?

A crucial aspect of compliance involves using user permissions accurately and appropriately managing access to GA4 data. Mishandling user permissions and access controls may lead to unauthorized access to GA4 data. It’s imperative to confine access strictly to authorized users only to prevent sensitive data from falling into the wrong hands. This helps to prevent data breaches and avoid potential legal consequences.

The importance of continuously collecting, tracking and storing valuable data

A proper analytics implementation is of course required to collect and store data so it can become information and, finally, insights. On the one hand, we see that a lot of data is rendered unusable due to the incorrect way it was gathered. Knowing you only have one chance to collect data, the way you collect it has to be correct. Maintenance is another necessity. For instance, we often see that historical information is no longer available due to an incorrect retention-period setting. Trends can then no longer be spotted and valuable analysis is no longer possible.

A 6-stage data model to handle your data
You can utilise the 6-stage data model below to ensure proper data handling, thereby acquiring valuable insights that serve as the foundation for your actions. The initial phases of this model encompass establishing and maintaining data collection, as well as managing data storage. These initial steps, involving data gathering and storage, are of utmost importance. The subsequent stages—’Transformation,’ ‘Visualisation,’ ‘Analysis,’ and even ‘Activation’ of data—rely heavily on these foundational data aspects: the proper setup and maintenance of data collection, as well as accurate data storage with appropriate retention periods.

How to maintain data quality?

Since analytics platform vendors constantly change their existing features on their platform or launch new ones, organisations often struggle with an outdated analytics set-up. This renders valuable data useless, leading to a squandering of data investments. To counteract this issue, an automated data quality monitor diligently oversees your analytics configuration, providing real-time notifications to the team when adjustments are necessary. This mechanism guarantees the upholding of stringent data quality standards at minimal expenses.

How can you minimize data loss?

With the help of a data quality monitor, you can automatically compare trends in today’s data with those from the previous day. Comparing your day-to-day data gives you critical alerts, enabling you to identify instances where a conversion (formerly goal completion) has been altered due to changes on your website. Comparing day-to-day traffic data can, for example, flag tagging issues, which can then be fixed directly. It also ensures the seamless flow of qualitative data into your data storage location. Subsequently, the process of transforming, visualising and analysing data can begin.
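
At its core, the day-to-day comparison described here is a threshold check. A minimal sketch (the 50% default threshold is an illustrative choice, not a standard):

```python
def daily_alert(today: int, yesterday: int, threshold: float = 0.5) -> bool:
    """Flag a potential tracking issue when today's event count deviates
    from yesterday's by more than the given fraction."""
    if yesterday == 0:
        return today > 0  # events appearing out of nothing also deserve a look
    change = abs(today - yesterday) / yesterday
    return change > threshold
```

A production monitor would run this per event and per conversion, and account for weekly seasonality, but the principle of comparing against the previous period stays the same.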

How to do a reliable data analysis: the famous ‘360 customer view’

In order to do a reliable data analysis, you need to make sure events are set up correctly and filters are configured accurately, so that reports can be relied upon. Misconfiguration can result in inaccurate data and analysis when, for example, certain events or traffic are excluded. This can lead to wrong conclusions, poor decision-making and missed opportunities for improvement.

Other important areas include understanding the distinctions between Universal Analytics and GA4, navigating the intricacies of varying reported conversion figures, and constructing attribution models.

The difference between Universal Analytics and GA4 output

We have all seen the differences between the output of Universal Analytics and GA4. These differences reduce trust in the data among our colleagues; the very people we want to convince of our analytics insights. There is also a difference between the data shown in the GA4 interface and the raw data. Although Google might say that they’re showing you all the data, GA4 is not showing you 100%. The reason is a focus on speed: Google wants to compete with other analytics platforms on loading time in the interface. One technique employed to achieve this involves session estimation, which is based on a smaller subset of the data. This also explains part of the differences between UA and GA4 output.

Why do reported conversion digits on social platforms differ from those in your analytics platform?

You might have noticed differences in how conversions are attributed to paid advertising or social channels. For instance, why does TikTok report a higher count of conversions than your analytics platform? Meta is also a name we hear often within our agency when differences in conversion reports are discussed. These discrepancies stem from the underlying business model of the advertising and social platforms: they profit from a higher number of conversions. Why the numbers differ comes down to attribution, the methods used to assign credit to the various marketing channels or touchpoints along the conversion path. GA4 now uses three different attribution models:

  1. Standard channel group for new users: First Click
  2. Standard channel group for sessions: Last Click
  3. Standard channel group for conversions: Data-Driven

Build your own attribution model

You can put your organisation in control of the challenges posed by differing attribution models by creating and managing your own. If you want to make full use of all available GA4 data, BigQuery is a viable option. Through the native BigQuery export, the raw data can be used. With the help of SQL, your team can reproduce the reporting options available in GA4 and even customise them. This makes it possible to define and use rule-based marketing attribution models, using logic that you own and can change. This puts you in the driver’s seat of attribution!
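
To make the rule-based idea concrete, here is a sketch of a last-non-direct-click rule, written in Python rather than SQL for brevity; the channel names are illustrative:

```python
def last_non_direct(touchpoints: list[str]) -> str:
    """Assign conversion credit using a last-non-direct-click rule.

    `touchpoints` is the ordered list of channels in one conversion path,
    e.g. ["organic", "direct", "email", "direct"]. 'direct' only receives
    credit when it is the sole channel in the path.
    """
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"
```

Because you own the rule, you can change it, for instance to first click or a position-based split, and every platform’s numbers can then be compared against one model you control.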

What are the conditions required to transform to a data centric organization?

While data-driven digital marketing focuses on using data as a tool, data-centric digital marketing goes a step further by viewing data itself as a valuable asset. It means seeing data as an essential business asset that is central to making decisions and developing marketing strategies. Collecting, storing and managing data is key to gaining valuable insights into customer behavior and trends. A data-centric approach is essential for organisations that want to grow and compete in a digital environment. By seeing data as a valuable asset, companies can differentiate themselves from their competitors and gain valuable insights that lead to effective marketing strategies and a better customer experience.

The four fundamental aspects for a data-centric organization

  1. Maintenance: Data quality is set-up properly and maintained constantly;
  2. Knowledge: Uniform understanding among all company stakeholders regarding tracked elements and the significance of various metrics;
  3. Application: Every employee knows how to use data whenever relevant and possible;
  4. Trust: Fostering a sense of confidence and reliance on data throughout the organization.

One example of how a client of ours helps all stakeholders across the organisation understand both the tracked elements and the various metrics is a ‘KPI catalog.’ This catalog contains all triggers and definitions of the measures, presented in comprehensible language for all stakeholders within their organisation.


Research underscores the paramount importance of data quality in a data-centric approach to business. Understanding customer preferences, driving decisions, and enhancing customer experiences depend on accurate and reliable data. Aspects such as data privacy, consistency, and proper setup play vital roles in maintaining data quality. Organisations must establish strong data fundamentals, automated quality monitoring, and reliable analytics implementation to navigate the challenges and unlock the benefits of a data-centric approach. Trust, knowledge, application, and maintenance are the cornerstones of such a transformation, enabling effective decision-making and superior customer engagement. Reliable data and employee data trust are fundamental for building thriving data-centric organisations in the future.

Author: Jesse Terstappen, Data Analyst, OrangeValley

The Shift in Online Data Collection

Our view on data and its governance has shifted over recent years, with privacy as the main driver of that shift. The time of placing cookies and saving (personal) data without clear communication and consent is over. The user and their privacy are now top priority, and this has to be respected. The many changes currently taking place in the digital landscape have made this apparent.

This article is written by Steven van Eck, Data & Analytics Manager at SDIM Online Marketing. SDIM is sponsor of the DDMA Digital Analytics Summit, which recently (October 13, 2022) took place in B. Amsterdam. 

Besides new legislation and guidelines on privacy, new technical changes derive from this issue as well. One of the biggest changes in the digital landscape is the impending disappearance of third-party cookies and the set limitations of first-party cookies. The outcome of these developments and its consequences have been dubbed as the ‘Cookieless Era’ among digital marketers and analysts. This Cookieless Era has forced us to become not only critical of the way we have been collecting data, but also of the kind of data that we collect.

The aforementioned developments have led to a variety of changes in the market and have announced a multitude of changes yet to come. Our ‘need’ to collect as much (relevant) data as possible won’t disappear; it’s just the rules that are now different. Logically, shifting the rules also requires the players to adapt their behaviour while ensuring a minimised risk of ‘unwelcome’ consequences.

It is crucial that we remain mindful of this changing landscape by adopting critical mindsets. This will be the absolute key in order to stay ahead. Thus, this article will discuss a number of important topics that overlap with these developments.

Server Side Tracking

The current method for collecting data involves firing a script, placed on the website, from the browser of a user or visitor (Client Side). Despite being an industry standard, it has numerous drawbacks. One is its sensitivity to adblockers: a user who has installed an adblocker in their browser can easily prevent their data from being shared with parties such as Google or Facebook. This limits the amount of data an online advertiser can collect. Another big constraint of Client Side tracking is the lack of control over your data and over what information is sent to third parties. That makes sense: you are placing a third-party script on a website, and how often do we fully comprehend its functionality? Do we always know exactly what data is being collected?

A popular alternative that has been (and currently is) making waves, amongst both smaller and larger parties, is Server Side Tracking. With this new technique, a script is no longer fired from the user’s browser, but from an external server (which is governed by the website’s owner). This allows for both faster site speed and the circumvention of adblockers. Furthermore, because the data is being collected from a personally owned server, you gain full control over the tags you fire and the data you collect. You even prevent third parties from collecting data on your website without your permission, which could occur when placing third-party scripts Client Side. Data collected might include personal user data that is shared while filling in a digital contact form for example. Server Side Tracking would prevent this issue, in other words: you are in full control of your data and its collection.
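
The control described here often takes the form of an allowlist applied before the server-side tag forwards anything to a vendor. A minimal sketch (the parameter names are illustrative, not a real vendor schema):

```python
# Parameters we explicitly allow to leave our server. This allowlist is
# illustrative; a real setup would mirror your own measurement plan.
ALLOWED_PARAMS = {"event_name", "page_path", "order_value", "currency"}

def filter_event(event: dict) -> dict:
    """Drop everything not on the allowlist before a server-side tag
    forwards the event, so accidental PII never leaves your server."""
    return {k: v for k, v in event.items() if k in ALLOWED_PARAMS}
```

With the filter on your own server, a form field or URL parameter that accidentally contains an email address is stripped before any third party ever sees it.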

From Third to First-Party Data

With the disappearance of third-party data, the need to collect first-party data naturally grows as it remains an invaluable asset to any kind of business. There currently are a number of browsers that already block third-party cookies by default, with Safari and Firefox being prime examples. Nevertheless, the browser with the biggest market share, Google Chrome, still allows the use of third-party cookies and has said it will continue to do so until 2024. Once they block third-party cookies by default, the ‘Cookieless Era’ will officially begin.

As mentioned before, our focus will shift from third- to first-party data. Creative solutions to tracking and data governance will become incredibly important in order to maintain a steady flow of data collection. We have seen quite some different solutions already, mainly in the shape of using email addresses. These include:

  • Log-in screen to read content
  • Newsletter subscriptions
  • Downloads of white papers or brochures
  • Quizzes that use email addresses to share results
  • Discounts in exchange for email addresses

Another solution is the collection of data first-party in order to use it for different purposes. These include:

  • Marketing automation: send personally customized newsletters to potential leads based on their position in the customer journey.
  • Website personalization: show personalized content depending on the profile of the visitor in question.
  • Advertising channels: email addresses can also be uploaded to platforms such as Google and Facebook for the purpose of advertising. This data could then be used for remarketing purposes or to reach similar user profiles.
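When uploading email addresses to advertising platforms, the major parties generally expect them to be normalized and SHA-256 hashed rather than sent in plain text. A small sketch of that preparation step (the normalization shown is a common convention; check each platform’s documentation for its exact requirements):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim the address, then SHA-256 hash it, as ad
    platforms typically require before matching uploaded email lists."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two spellings of the same address produce the same hash, so they still match:
print(normalize_and_hash("  Jane.Doe@Example.com "))
print(normalize_and_hash("jane.doe@example.com"))
```

Because only the hash leaves your systems, the platform can match the address against its own (hashed) user base without ever receiving the raw email.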

An important thing to keep in mind: when collecting and using first-party data, you need to be fully transparent towards users about what you are doing. They have to be aware that their data is collected and used for other purposes, so that they can decide whether or not they agree.

Developments in Web Analytics

All these current and upcoming developments also influence web analytics. Web analytics platforms mostly use (first-party) cookies to recognize visitors, so shortening the cookie lifespan has a significant impact on the results we see in tools such as Google Analytics. If visitors cannot be recognized, or only for a short period of time, it becomes increasingly difficult to map out a complete customer journey. Moreover, attributing traffic to sources and channels will no longer be possible in the way we are (or were) used to.

To fill these gaps in the data, new techniques are being used to maintain a comparable way of collecting web statistics; machine learning and data modeling in particular are becoming increasingly common. A prime example is Google’s new analytics platform, Google Analytics 4 (GA4). This new generation of Google Analytics has these techniques built in to combat issues such as data loss – a necessity, as Google Analytics remains a market standard despite the numerous setbacks it has faced this year alone.

With its machine learning algorithms, GA4 helps surface important insights and trends. For example, when visitors from a specific campaign decline, GA4 can flag this as one of its Insights. It also offers predictive metrics that estimate the revenue your visitors are expected to generate. This data can subsequently be used in your Google Ads campaigns to target users who are likely to generate higher revenue.

Machine learning also powers data modeling. When users deny consent for analytics cookies, their data is unavailable to you, resulting in so-called ‘data gaps’. To close these gaps, data modeling is applied using data from comparable visitors who did consent to analytics cookies. This way, you respect the choice of your visitors while avoiding a loss of data.
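As an illustration of the idea (and only the idea – GA4’s behavioral modeling is far more sophisticated), a naive version of gap-filling could simply assume non-consenting visitors behave like consenting ones and scale the observed conversions by the consent rate:

```python
def estimate_total_conversions(observed_conversions: int, consent_rate: float) -> float:
    """Naive gap-filling sketch: assume non-consenting visitors convert at
    the same rate as consenting ones. Real modeling (as in GA4) uses far
    richer behavioral signals; this only illustrates the principle of
    extrapolating from consented data."""
    if not 0 < consent_rate <= 1:
        raise ValueError("consent_rate must be in (0, 1]")
    return observed_conversions / consent_rate

# 120 conversions measured, but only 60% of visitors accepted analytics cookies:
print(estimate_total_conversions(120, 0.60))
```

The hypothetical numbers above would suggest roughly 200 conversions in total, of which only the consented 120 are directly observable.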

The possibilities regarding machine learning are still fairly limited, as these functionalities are relatively new. We expect further developments in this area to be introduced in the coming years.

Big Ad-tech developments

Big ad-tech parties owe their very existence to data and, in today’s environment, are heavily invested in new developments around data collection. This has resulted in many new functionalities that support collecting more relevant (conversion) data. Some examples:

  • Facebook Conversion API: a technique developed by Facebook to collect data and send it to Facebook from server to server, without relying on client-side tracking – which is becoming more difficult due to the disappearance of third-party cookies and the limitations applied to first-party cookies. With this server-side solution you can circumvent these obstacles. Moreover, you have more control over the data you send from server to server.
  • Google Enhanced Conversions: with Enhanced Conversions from Google you include additional user data when registering conversions, for example a user’s email address or home address. The data is sent to Google in hashed form to ensure user privacy. This additional data helps match a visitor to an existing Google account that previously interacted with an ad, allowing extra conversions to be attributed, including for users who switch between devices.
  • Google Consent Mode: this new functionality from Google collects information about the cookie consent a user has provided, respecting their choice of whether or not cookies may be placed. If a user refuses consent, for example, only a limited amount of data is collected from their visit – data that does not require cookies. With Google Consent Mode you respect the user’s choice while still being able to collect ‘cookieless’ data.
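To make the server-to-server idea concrete, here is a hedged sketch of how a Conversions API event payload could be assembled. The field layout follows Facebook’s documented event shape, but treat the specifics (API version, required fields) as assumptions to verify against the current documentation; no HTTP call is made here:

```python
import hashlib
import time

def sha256_lower(value: str) -> str:
    """Normalize and hash an identifier, as the Conversions API expects
    for user data fields such as the email ("em")."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_purchase_event(email: str, value: float, currency: str) -> dict:
    """Assemble a single purchase event in the Conversions API shape."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {"em": [sha256_lower(email)]},  # hashed, never plain text
        "custom_data": {"value": value, "currency": currency},
    }

payload = {"data": [build_purchase_event("jane@example.com", 49.95, "EUR")]}
# The real call would POST this JSON from your server to the Graph API
# events endpoint for your pixel, authenticated with an access token.
print(payload["data"][0]["event_name"])
```

Because the payload is built and sent server side, browser restrictions and ad blockers never see it – which is exactly the obstacle the Conversion API is meant to circumvent.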

We expect that the development of new data- and advertising-related technologies will be ongoing in the coming years. Furthermore, they will surely play a big role in the online landscape.

Respect the User

One thing is certain: the past months have been anything but boring, and there is much more to explore in the (near) future. Many new data- and privacy-related developments await us, as data will remain a vital part of our daily activities, even if we have to abide by the rules. Those rules exist for a reason: however important data is, the security and privacy of our visitors must be respected. Always be clear and transparent towards users about which data you gather from them and how you might use it for other purposes.

Don’t focus too much on what you will no longer be able to track; focus instead on the new possibilities, advancements and technologies that allow us to keep growing. This is a unique opportunity presented to us, one with many challenges. But instead of running away, we should face them head on, as there are many exciting things waiting for us in our data-driven world.

About the author: Steven van Eck is Data & Analytics Manager at SDIM Online Marketing.