From Decoration to Direction: Why Analytics Only Matters When It Drives Action

We often talk about data as a source of truth, a strategic asset, or the new oil. But according to Rasmussen, that narrative misses the point. “The biggest myth in analytics is that more data equals more success. It doesn’t,” they say. “Success only comes when data leads to action.”

The myth of more

In countless boardrooms and dashboards, Rasmussen has seen the same issue repeat itself: companies mistaking collection for value. “Having data isn’t the same as using data. It’s like owning a library and thinking the books on the shelf make you wise. They don’t — unless you read them, connect them, and let them influence your thinking.”

That mindset is more common than we think. Dashboards overflow with metrics, yet decisions remain gut-driven or politically motivated. “We decorate rooms with charts,” Rasmussen says, “but rarely do we let those charts shape the decisions themselves.”

The circus of analytics — and what AI might do about it

With AI and automation evolving rapidly, the analytics world is entering a new phase. But does that mean clarity — or more chaos?

“Right now, analytics feels like a circus,” Rasmussen says. “Flashing lights, too many acts at once, and clowns competing for attention.” AI, they argue, won’t fix that by default. “If you chase hype without strategy, AI just adds more performers to an already crowded stage. But if you invest thoughtfully, it can help cut through the noise.”

In short: AI won’t solve confusion — but it might help you manage it, if you know what you’re doing.

“The question isn’t whether the circus will change,” Rasmussen adds.
“It’s whether you’ll be the ringmaster — or just another distracted spectator.”

From data librarians to strategy architects

So what does this mean for the role of the analyst?

“Analysts will become more critical than ever,” Rasmussen says, “not for gathering data, but for interpreting it and using it to drive real decisions.”

In many organizations today, data is still used as wallpaper — something to justify decisions that have already been made. “That needs to flip,” they argue. “Analysts should become architects. They help build the strategy itself, not just report on it.”

That requires trust. It means analysts need to be at the table when decisions are made, not just afterwards with a pretty slide deck. The future belongs to companies that recognize this shift — and act on it.

A compass, not a vending machine

For leaders frustrated with the ROI of their analytics investments, Rasmussen offers one clear piece of advice:

“Define what you want to achieve before you invest. Don’t buy the shiny tool first and only then ask where you’re trying to go.”

Analytics isn’t a magic machine that turns data into money. It’s a compass. “But a compass is only useful if you have a destination. Without that, you’re just pedaling harder without knowing where you’re headed.”

See Steen Rasmussen live at the Digital Analytics Summit 2025 on 9 October in B. Amsterdam, 9:25-10:10, in THE LOUIS.

Author
Steen Rasmussen

Data with Purpose: Why First-Party Data Is Crucial for NGOs

In a data-driven world, NGOs face a different reality than commercial organisations. Instead of selling products or services, they build communities, foster trust, and strive for long-term social impact. But how do data, privacy, and digital innovation play a role in this mission?

Ahead of their session at the Digital Analytics Summit 2025, we spoke with Marinio Palatta and Daan Eenkhoorn from Longfonds, a leading Dutch health NGO focused on lung disease, about the unique challenges NGOs face – and why first-party data is becoming a strategic asset for the sector.

Trust Before Transactions

“NGOs operate in an environment defined by trust and mission rather than profit and transactions,” Longfonds explains. “That means that, beyond legal compliance, we carry a moral responsibility to protect the privacy of supporters, patients, and volunteers.”

This responsibility comes with specific limitations. Unlike most commercial organisations, health-focused NGOs deal with highly sensitive data – and the advertising restrictions that platforms place on health topics apply to non-profits just as much as to commercial advertisers. Campaigns about public health, such as clean air or smoking prevention, are sometimes flagged as sensitive content, making it harder to reach the right audiences via standard digital advertising tools.

“In this context, having a strong, reliable first-party data foundation isn’t just a ‘nice-to-have’ — it’s a necessity. It allows us to communicate directly, meaningfully and compliantly.”

A Changing Donor Landscape

At the same time, donor behaviour is shifting. Where NGOs once relied on a steady base of recurring supporters, today’s landscape is far more diverse.

“People still want to give, but they want to do it in their own way,” Longfonds says. “We now see loyal monthly donors, one-time givers, project-based supporters, and even people including NGOs in their wills.”

Recent research confirms this: according to Donating in the Netherlands: Trends 2025, project-specific donations are growing, while structural giving is under pressure. Younger donors prefer QR codes and digital tools; older generations stick to direct debit. Meanwhile, average gift size is decreasing, and retention is becoming harder to maintain.

“Understanding these patterns is only possible when you own your data and track engagement at the individual level – in a respectful, transparent way.”

The Strategic Rise of First-Party Data

Looking ahead, Longfonds sees first-party data becoming more than a compliance tool — it’s a strategic driver of innovation and resilience.

“Control over your own data means independence from third-party platforms, more reliable insights, and better continuity,” they explain. “It also enables responsible experimentation with AI and personalization, as long as the data is clean, consented, and well-structured.”

In health-related sectors especially, first-party data opens the door to delivering the right content at the right moment — for example, sending tailored information to people living with specific lung conditions, if they’ve opted in.

It also fuels community-building strategies. “We’re not just looking at donations. We want to know how people connect, participate, and contribute — whether that’s joining an event, reading our updates, or helping spread awareness. That kind of engagement is equally valuable.”

Where to Start: From Vision to Pilot

For NGOs that want to become more data-driven but feel overwhelmed, Longfonds offers this advice: Don’t start with technology — start with a focused question.

“Ask yourself: What’s one thing we wish we could do with data, that we currently can’t? And why would that matter to our donors or beneficiaries?”

Once that question is clear, it becomes easier to build a small pilot, test the impact, and scale gradually. External expertise can help along the way, but what matters most is clarity of purpose.

“Working with data should support your mission, not distract from it. When done well, it helps you understand your audiences, strengthen relationships, and improve your outcomes — all while staying close to your values.”

Authors
Marinio Palatta & Daan Eenkhoorn

How do you talk numbers? Why Most Reporting Meetings Fail

Tim Ceuppens at the Digital Analytics Summit 2025

Every analyst has been in one: a reporting meeting that runs on numbers but goes nowhere.

For Tim Ceuppens, that’s not just a waste of time — it’s a missed opportunity to drive real impact.

“Most reporting meetings are just there to inform,” Tim says. “We spent this, we made that, here’s what we’re working on. If you’re lucky, you get a quarterly update with suggestions — but even then, the overload of information tends to block good decision-making.”

At the Digital Analytics Summit 2025, Tim shares his take on why so many meetings fail to land, and what analysts can do differently to make their insights truly resonate — especially with non-technical stakeholders.

Mismatched worlds, missed connections

One core issue? Everyone’s speaking a different language.

“A Google Ads rep lives in click-through rates, an SEO in rankings, a branding agency in reach. No one connects those metrics to how the business earns money. That disconnect kills impact.”

Tim suggests a shift in mindset, starting with the people in the room.

“Quick tip: ask your stakeholders what their bonuses are tied to. Then find where your world intersects with those goals. That’s where you can deliver meaningful data that improves decisions.”

Too many numbers, too little value

Another common trap: presenting too much data, too fast.

“One time I briefed an account manager on how to present to the executive committee. I said: keep it simple — one KPI per slide. Don’t drown them in numbers.”

But just before the presentation, the agency head panicked — and added slides full of dense matrices.

“It backfired. The non-technical people zoned out. The number nerds fixated on irrelevant details. Nobody listened.”

Why the change?

“They didn’t feel like one number per slide proved their value. But flooding the audience with metrics only undermines that.”

What to break: bad habits in data presentation

So what’s the first habit analysts should let go of?

“Stop showing the same numbers to people outside your team that you’d share internally,” Tim says. “Ask yourself: what does this person or team need to make a better decision?”

The key is clarity over completeness.

“Do they need CTRs by keyword, or just a sense of CPA performance by campaign? Show the minimum you need to make your point. That’s the maximum you should present.”

And always be ready — but selective.

“Sure, have a printout with detailed numbers in your back pocket. But they don’t all need to be on the screen.”

Analysts and AI: augmentation, not replacement

With AI evolving rapidly, some fear reporting will become fully automated. Tim isn’t one of them.

“The reports of the death of analysts have been greatly exaggerated,” he says. “Not just because AI is still bad at math — but because context matters. Analysts bring strategy and nuance.”

That doesn’t mean AI won’t change the work — far from it.

“AI will make our jobs easier and more efficient. Like what spreadsheets did for bookkeepers.”

He’s especially excited about using AI to tailor reporting to stakeholder profiles.

“Helping people evaluate options and make decisions faster — that’s a huge leap forward compared to the way we run reporting meetings today.”

Want to rethink your reporting?

Catch Tim Ceuppens live at the Digital Analytics Summit 2025 on October 9 in B. Amsterdam.

🎟️ Get your ticket now

GA4: Brilliant and Broken – Doug Hall’s Honest Take

Doug Hall (Duga Digital) doesn’t do sugarcoating. Nor does he do blind negativity. When it comes to Google Analytics 4, he’s what you might call a pragmatic optimist. “I celebrate the cool stuff GA4 does better than Universal Analytics,” he says. “And I embrace the opportunity to solve what’s still missing. What frustrates me? The constant whining.” 

At the Digital Analytics Summit 2025, Doug brings a talk that’s part critical assessment, part troubleshooting guide — and part rallying cry to digital analysts: stop complaining and start building. 

Pragmatic optimism, not platform loyalty 

For Doug, GA4 isn’t about love or loyalty, it’s about what works. 

“Don’t drop a tool or platform because of religious arguments,” he says. “Be an engineer. Be a scientist. Don’t let brand loyalty colour a technical decision.”

Yes, GA4 has issues. But it also has genuine improvements. Doug lists annotations, configurable menus, and data transfer capabilities as big wins over the old Universal Analytics setup. 

Still, there are some gaps that sting. “Property-level filters are increasingly being missed,” he says. “I want to be able to filter based on a GTM container ID, for example. Protecting against bots, spam, and misconfigured data collection is fundamental to data quality. You can do this to a degree in Server-Side GTM — but it’s better done in the property config.” 

Don’t complain, contribute 

Doug has a sharp take on the typical “GA4 rant” you might see on social media. 

“Don’t just bitch and moan that GA doesn’t do X, Y or Z and damnit, the lousy GA team better do this one feature for me or I’ll say bad things on LinkedIn,” he says with a grin. 

Instead, his advice – especially for newer analysts – is to give feedback that matters. 

“Explain what you’re doing. Give rich context. Show how your clients need to do something with GA data. Explain the business value. Explain how you solve it, and what business function it enables. Always explain the value — ideally at scale — for agencies, clients and Google.” 

That, he argues, is how you help shape the product. 

The future: AI, privacy… and measuring what matters 

Doug doesn’t shy away from the broader shifts reshaping analytics: AI and privacy-first marketing. 

“Privacy is a thing and it’s here to stay,” he says. “But currently, it’s sub-par. The only group of stakeholders to get value out of GDPR so far is lawyers.” 

His verdict? “It’s well founded on the right principles, but the execution is rubbish.” 

So what’s the better direction? “Privacy settings at a user level that are transparently applied without friction,” he says. 

When it comes to AI, Doug sees real promise – especially in agentic AI: systems that act on behalf of users, not just answer questions. “AI as a companion is the right direction,” he says. “It won’t replace humans. Human input is mandatory for high-end functions.” 

He even suggests we might need a shift in what we measure. 

“We have web and app streams in GA,” he notes. “But what happens when your Shopify store is selling to an AI agent via an MCP server? You need to be able to measure the agentic performance.” 

And here comes the punchline: “Marketers need to stop measuring users — PLEASE stop it already! You invest in campaigns, not users. So measure the effectiveness of the investment. Better for business, better for privacy.” 

Want the honest story on GA4? 

If you’re tired of rants but still frustrated with GA4, Doug’s session offers something rare: clarity without complaint. He’ll show what’s working, what’s broken, and how to move forward. Not just as a user, but as a contributor to the evolution of digital analytics. 

Catch Doug Hall live at the Digital Analytics Summit 2025 on October 9 in B. Amsterdam. Get your ticket now 

From Talk to Action: How MCP Servers Make AI Actually Useful

AI may be everywhere, but in digital analytics, it often feels more like a novelty than a working tool. For Gunnar Griese, that’s not a failure of the models themselves, it’s a failure of integration. 

“LLMs are great at talking,” he says, “but they’re isolated from the tools we actually use. That’s exactly what MCP servers fix.” 

On October 9, Gunnar will take the stage at the Digital Analytics Summit 2025 to explain how MCP servers (Model Context Protocol servers) are quietly turning AI into a powerful execution partner. One that doesn’t just suggest what to do, but actually does it. 

What are MCP servers and why do they matter?

“While LLMs are great at generating text responses, they’ve been isolated from the tools we actually use in our work,” Gunnar explains. “MCP servers act as intermediaries that enable AI applications not only to discuss what should be done, but also to actually do it.” 

In short: rather than having to manually copy-paste code or AI suggestions, analysts can now build workflows where AI “understands” the setup and implements changes within analytics platforms directly. It’s a shift from passive suggestion to intelligent action. 
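To make the idea concrete, here is a minimal sketch of such an intermediary, written against the open-source MCP Python SDK (the FastMCP helper). The server name and the tool itself, a GA4 event-name checker, are hypothetical examples rather than part of Gunnar’s setup:

```python
# Minimal sketch of an MCP server exposing one analytics tool.
# Assumes the official MCP Python SDK is installed: pip install mcp
import re

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics-helper")  # hypothetical server name

RESERVED_PREFIXES = ("firebase_", "ga_", "google_")  # GA4 reserved event-name prefixes


@mcp.tool()
def check_event_name(name: str) -> str:
    """Validate a GA4 event name (length, characters, reserved prefixes)."""
    if len(name) > 40:
        return f"'{name}' is too long: GA4 event names are limited to 40 characters."
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name):
        return f"'{name}' must start with a letter and use only letters, digits and underscores."
    if name.lower().startswith(RESERVED_PREFIXES):
        return f"'{name}' starts with a reserved prefix ({', '.join(RESERVED_PREFIXES)})."
    return f"'{name}' looks like a valid GA4 event name."


if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so a connected client can discover and call the tool
```

Once an MCP-capable client is pointed at this server, the model can call check_event_name on its own instead of asking you to copy-paste anything, which is exactly the shift from suggestion to execution described above.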

From assistant to execution partner 

Gunnar has used tools like ChatGPT and Claude to solve coding problems and work faster, but, until recently, always stayed in charge of execution. That’s changing. 

“With MCP servers, we can start outsourcing the laborious, time-consuming technical tasks to AI agents,” he says, “while maintaining human oversight for strategic decisions.” 

The benefit? Analysts can spend less time configuring tags or debugging setups, and more time thinking critically about measurement strategy, business context and what the data actually means. 

Automation meets human oversight 

Will AI replace digital analysts? Not according to Gunnar. Instead, he sees a division of labor emerging: AI handles the mechanical implementation, while human experts take care of strategic guidance, interpretation and quality control. 

“AI can execute tasks efficiently,” he says, “but it won’t replace the strategic thinking, business context understanding, and quality assurance that experienced analysts bring.” 

This becomes even more crucial as privacy regulations tighten. Automation must go hand-in-hand with ethical data handling and clear oversight. This is a task that still firmly belongs to people. 

So how do you start? 

“Don’t overcomplicate it,” Gunnar advises. “Start with one repetitive, annoying task in your workflow. Something technical and time-consuming. That’s your entry point.” 

Many building blocks are already available. Try the GTM MCP server for tag management, or the Playwright server for automating browser tasks. Hook these into an AI interface of your choice, and start experimenting. 

“Don’t expect it to work perfectly the first time. My proof-of-concept needed multiple iterations. But that’s the point: you’re building something new. The goal isn’t to replace your expertise; it’s to let you use it where it matters most.” 

The future is now 

MCP servers might sound futuristic, but they’re already here, and the people experimenting now will be tomorrow’s leaders. 

“We’re entering a new phase,” Gunnar says. “If you’re still doing everything manually while your competitors are automating with AI, you’re going to fall behind.” 

Ready to see how it works? 

🧠 Catch Gunnar Griese live at the Digital Analytics Summit 2025, on October 9 in B. Amsterdam. 

🎟️ Explore the program and get your ticket 

Look-alike Modeling: A Comprehensive Guide to Customer Acquisition

Effective targeting is essential for maximizing return on analytics and advertising investments. Look-alike modeling, a data-driven approach, can increase ROI by up to 30% by identifying high-value prospects more accurately than traditional methods. By analyzing customer data and finding individuals who share similar characteristics with existing customers, brands can achieve higher click-through rates and greater campaign efficiency. Without look-alike modeling, advertisers may miss out on reaching a 20-25% broader relevant audience, resulting in lower engagement and reduced conversion rates.

About the author: Guus Rutten is Managing Director Data Services at GX (part of Happy Horizon). GX is one of this year’s sponsors of the DDMA Digital Analytics Summit.

 

Understanding Look-alike Modeling

Look-alike modeling leverages data science techniques to create segments of potential customers who exhibit behaviors and attributes similar to your target audience. By analyzing the data of your most valuable customers, the model can identify individuals with a high probability of converting.

Key Factors for Success

The quality of seed data is critical to the success of look-alike modeling, as the results of the model are heavily dependent on the data used. The following factors collectively ensure the integrity of the model:

  • Accuracy: Make sure the segment closely aligns with the traits and behaviors of the intended audience. This requires confirming that the people within the seed segment share the relevant characteristics of the target group.
  • Completeness: Include a comprehensive set of relevant attributes to capture the nuances of the target audience. This may involve considering demographic factors, purchase history, website behavior, and other relevant data points.
  • Consistency: Maintain data integrity throughout the modeling process to ensure that the data is consistent and free from errors. This involves verifying data accuracy, resolving inconsistencies, and addressing missing values.
  • Timeliness: Use up-to-date data to reflect the latest trends and changes in customer behavior. Outdated data can lead to inaccurate models and ineffective targeting.
  • Validity: Ensure that the data is reliable and relevant to the modeling objectives. This involves verifying the data source, assessing data quality, and considering the potential biases or limitations of the data.
  • Uniqueness: Prevent duplicate profiles from skewing the model by ensuring that each individual is represented only once in the seed segment. This helps to avoid overrepresentation of certain individuals and maintain a balanced representation of the target audience.

Without these critical components in place, the modeling process cannot proceed effectively, as inaccurate or incomplete data would lead to poor results — a classic case of “garbage in, garbage out.” Therefore, ensuring the quality of the seed data is a foundational step in creating a successful look-alike model.
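As a rough illustration of what those checks can look like in practice, the sketch below runs a few basic completeness, uniqueness and timeliness tests on a hypothetical seed file (the file and column names are placeholders, not a prescribed schema):

```python
# Hedged sketch: basic seed-data quality checks before look-alike modeling.
# Column names (customer_id, email, last_purchase_date) are hypothetical.
import pandas as pd

seed = pd.read_csv("seed_segment.csv", parse_dates=["last_purchase_date"])

# Completeness: how much of each attribute is actually filled in?
missing_share = seed.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:\n", missing_share)

# Uniqueness: duplicate customers would skew the model.
dupes = seed.duplicated(subset=["customer_id"]).sum()
print(f"Duplicate customer_id rows: {dupes}")

# Near-duplicates on email (casing, stray whitespace) via simple normalization.
normalized = seed["email"].str.lower().str.strip()
print(f"Duplicate emails after normalization: {normalized.duplicated().sum()}")

# Timeliness: flag records with no activity for over a year.
stale = (pd.Timestamp.today() - seed["last_purchase_date"]).dt.days > 365
print(f"Stale records (>1 year since last purchase): {stale.sum()}")
```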

The Look-alike Modeling Process

Look-alike modeling is a powerful technique for identifying high-value prospects by finding individuals who share similar characteristics with your existing customers. The process involves several steps that leverage data science to create targeted segments. Data quality plays a crucial role, but the approach itself can be implemented within a relatively short timeframe, regardless of business size, industry, or digital maturity.

  • Step 1: Data Mining: Gather and assess the necessary data from various sources, including first-party data (e.g., CRM, CDP, website analytics) and potentially third-party data (e.g., demographic information, purchase history). Identify the specific data required for look-alike modeling and determine its availability within existing systems or external sources. Clearly define the target segment to focus the data collection and analysis process.
  • Step 2: Business & Data Understanding: In addition to analyzing the data’s value and identifying potential anomalies, it’s crucial to incorporate business logic to ensure that the model is both logical and accurate. This includes gaining a deep understanding of each data point’s significance, how it relates to the target audience, and ensuring alignment with real-world business dynamics. The human factor plays a key role here, as business insights help verify the model’s correctness and relevance.
  • Step 3: Fuzzy Matching: Employ fuzzy matching techniques to identify and exclude duplicate profiles, ensuring that each individual is represented only once in the seed segment. This helps to avoid overrepresentation of certain individuals and maintain a balanced representation of the target audience.
  • Step 4: Data Exploration: Clean and transform the data as needed to prepare it for modeling. This may involve handling missing values, normalizing data, and creating new features. Source attributes can be numerical or categorical, but the model itself works only on numbers, so categorical attributes must be encoded before training (see the sketch after this list).
  • Step 5: Modeling: Build and train the look-alike model using a suitable machine learning algorithm. Consider factors such as the size of the dataset, the complexity of the relationships between features and the target variable, and the desired level of interpretability when selecting the algorithm.
  • Step 6: Evaluation: Assess the model’s performance using appropriate metrics, such as accuracy, precision, recall, and F1-score. Continuously evaluate and refine the model to ensure its effectiveness and identify areas for improvement.
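The sketch below ties steps 4-6 together in a minimal, assumption-heavy form: it treats look-alike modeling as a classifier that separates seed customers from a sample of the general pool and then scores prospects by that probability. File names, column names and the choice of gradient boosting are illustrative, not a prescribed methodology:

```python
# Hedged sketch of steps 4-6: prepare the data, train a model that separates seed
# customers (label 1) from the wider pool (label 0), then score unseen prospects.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("audience.csv")   # seed customers plus a sample of the general pool
y = df["is_seed_customer"]         # 1 = existing high-value customer, 0 = general pool
X = df.drop(columns=["is_seed_customer", "customer_id"])

numeric = X.select_dtypes("number").columns.tolist()
categorical = [c for c in X.columns if c not in numeric]

# Step 4: scale numeric attributes and one-hot encode categorical ones,
# so the model only ever sees numbers.
prep = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# Step 5: train the classifier.
model = Pipeline([("prep", prep), ("clf", GradientBoostingClassifier())])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
model.fit(X_train, y_train)

# Step 6: evaluate with precision, recall and F1 before using the scores for targeting.
print(classification_report(y_test, model.predict(X_test)))

# Rank unseen prospects by their similarity to the seed segment.
prospects = pd.read_csv("prospects.csv")
prospects["lookalike_score"] = model.predict_proba(prospects[X.columns])[:, 1]
print(prospects.nlargest(10, "lookalike_score")[["customer_id", "lookalike_score"]])
```

The resulting lookalike_score can then feed an advertising platform or CDP as the basis for a look-alike audience.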

Benefits of Look-alike Modeling

Look-alike modeling, a strategic approach to customer acquisition, offers businesses a multitude of benefits. By leveraging data-driven insights, this technique enables marketers to enhance targeting precision, improve return on investment, attract new customers, and personalize customer experiences.

  • One of the key advantages of look-alike modeling is its ability to reach highly relevant audiences with greater accuracy. By identifying individuals who share similar characteristics with existing customers, businesses can tailor their marketing efforts to those most likely to convert. This precision not only reduces wasted advertising spend but also increases the efficiency of marketing campaigns.
  • Moreover, look-alike modeling can improve ROI by optimizing marketing budgets and driving higher conversions. By focusing on high-value prospects, businesses can reduce acquisition costs and increase revenue. Additionally, the insights gained from look-alike modeling can be used to refine marketing strategies and tailor campaigns to specific audience segments, further enhancing ROI.
  • Another significant benefit of look-alike modeling is its ability to attract new customers. By identifying individuals who share similar characteristics with existing customers, businesses can expand their reach and acquire new customers who are more likely to be a good fit for their products or services. This can lead to long-term loyalty, increased profitability, and sustainable growth.
  • Furthermore, look-alike modeling provides valuable insights into customer behavior and preferences. By analyzing the data of existing customers and identifying common patterns, businesses can gain a deeper understanding of their target audience. These insights can be used to refine marketing strategies, tailor messaging, and personalize customer experiences.
  • Finally, look-alike modeling enables businesses to deliver personalized experiences that resonate with individual customers. By understanding customer preferences and tailoring messaging and offers accordingly, businesses can increase engagement, build stronger relationships, and drive higher conversion rates.

Challenges and Considerations in Look-Alike Modeling

While look-alike modeling offers significant benefits, it is essential to be aware of the potential challenges and considerations involved to ensure its effectiveness.

Data Quality: The integrity, completeness, and relevance of the data used in look-alike modeling are essential. It’s important to ensure that the data is accurate, comprehensive, and truly representative of the target audience. Errors, missing data, or biases can result in flawed models and poor targeting outcomes.

Model Complexity: Avoid overfitting the model, which occurs when the model becomes too complex and fits the training data too closely, potentially hindering its ability to generalize to new data. Carefully select the model complexity and tune its hyperparameters to strike a balance between underfitting and overfitting.
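One hedged way to put that balance into practice, reusing the hypothetical pipeline from the earlier sketch, is to let cross-validation choose the model complexity rather than trusting training-set performance:

```python
# Hedged sketch: tune model complexity with cross-validation to guard against
# overfitting. Assumes `model`, `X_train` and `y_train` from the previous sketch.
from sklearn.model_selection import GridSearchCV

param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [2, 3, 4],       # shallower trees mean a simpler model
    "clf__learning_rate": [0.05, 0.1],
}

search = GridSearchCV(model, param_grid, scoring="f1", cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Cross-validated F1:", round(search.best_score_, 3))
# A large gap between this score and the training-set score is a sign of overfitting.
```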

Ethical Considerations: Address privacy concerns and ensure fair and unbiased targeting. Comply with relevant data protection regulations and obtain necessary consents. Avoid discriminatory targeting based on protected characteristics and ensure that look-alike models are not used to perpetuate biases or stereotypes.

Dynamic Nature of Audiences: Recognize that customer behavior and preferences can change over time. Regularly update and retrain the look-alike model to incorporate new data and adapt to evolving trends. This ensures that the model remains relevant and effective in identifying high-value prospects.

By carefully addressing these challenges and considerations, businesses can maximize the benefits of look-alike modeling and leverage its power to drive customer acquisition and growth.

Conclusion

Look-alike modeling is a powerful asset for businesses seeking to optimize customer acquisition through digital analytics. By leveraging data-driven insights and advanced analytics, businesses can create precision-targeted campaigns that resonate with their ideal audience, leading to improved ROI, higher acquisition rates, and more personalized customer experiences. Effectively utilizing look-alike modeling enables businesses to meet their marketing objectives and thrive in today’s competitive, data-centric landscape, driving growth and long-term success.

Would you like to know more about how an organization can effectively implement and optimize look-alike modeling to achieve its marketing objectives? Check out this case from Roularta.

Digital Analytics in transition: Privacy, tooling and future trends

In today’s digital analytics landscape, analysts are navigating a dynamic and challenging terrain. Despite these formidable challenges, the role of analysts has never been more critical. We must rise to the occasion, providing strategic recommendations to our businesses and executing these recommendations with precision. In this interview with Marie Fenner and Louis-Marie Guérif (Piano), we delve into the crucial developments shaping the digital analytics field and explore the steps necessary to ensure data privacy assurance, common errors to avoid, and the future direction of analytics tools.

Marie Fenner is the Global SVP for Analytics, and Louis-Marie Guérif is the Group DPO & Sustainability Manager at Piano (the sponsor of the 2023 DDMA Digital Analytics Summit on October 12th).

There is a lot happening within the digital analytics field. What are some significant developments that are currently playing a pivotal role in shaping this field?

‘Never have we faced such turbulence in the analytics market as we do now with the triple whammy effect – strict data privacy enforcement, sunset of Universal Analytics and sunset of third-party cookies. We, as analysts, must find a way of protecting our brands from any enforcement action and financial fines, embark on a major migration to GA4 or alternative tools and find reliable ways to attribute the success of our marketing campaigns.

Also, there’s a lot happening at NOYB. They have started filing official complaints against companies that are not following GDPR guidelines in their mobile app tracking. Although the new framework to transfer data between the EU and the US has been announced, there is a great deal of uncertainty, as NOYB plans legal action against this new framework (expect Schrems III!).

Finally, despite ‘crying wolf twice’, Google will deprecate third-party cookies next year.

Daunting though it may sound, analysts’ position should be elevated to take on these challenges, make the right recommendations to the business, and execute these recommendations.’

When examining data privacy, what steps must we take to ensure its assurance? Which factors come into play, and over which of them can we exert influence?

‘When considering data privacy, it’s essential to take specific steps to ensure its assurance. Several factors come into play, and we have the ability to influence some of them.

To begin with, I think we should be focusing more on Privacy by Design principles. Data protection is fundamentally a risk-based approach. It’s good that companies are increasingly conducting Data Protection Assessments or Audits to evaluate the reliability of the chosen Analytics solutions in relation to their risk acceptance criteria.

Transparency is another key aspect. Being forthright about how data will be used, where it will be stored, and how end-users’ rights will be upheld is very important. Additionally, committing to a well-defined and purpose-limited Data Processing Agreement is a key step in ensuring data privacy.

In summary, safeguarding data privacy involves a multifaceted approach that includes risk assessment, transparent practices, and clear contractual commitments. It’s imperative for organizations to proactively address these aspects to protect sensitive information and build trust with their stakeholders.’

What are the common errors made when striving to prioritize privacy? Is there something that everyone should refrain from doing immediately?

‘When it comes to prioritizing privacy, it’s crucial to be aware of some common misconceptions and pitfalls that organizations often encounter. Avoiding these errors is essential for ensuring that your privacy efforts are effective and compliant with regulations. Some key points to keep in mind:

  1. Misunderstanding PII vs. Personal Data: One common mistake is assuming that Personally Identifiable Information (PII) is equivalent to the definition of Personal Data within the GDPR. While there is overlap, PII and GDPR-defined Personal Data may not always align perfectly. It’s important to understand the nuanced differences to ensure accurate compliance.
  2. Confusing pseudonymization with anonymization: Pseudonymization involves replacing or masking identifying information but still allows for potential re-identification in some cases. Anonymization, on the other hand, makes it practically impossible to identify individuals from the data. Recognizing this distinction is vital for safeguarding privacy effectively (see the short illustration below).
  3. Relying solely on the DPF list: A provider’s presence in the Data Privacy Framework (DPF) doesn’t guarantee that international data transfer requirements under the GDPR are met. It’s important to remember that there are countries outside the EU other than the USA, and that data transfer compliance involves more than just the provider’s location. Always conduct a thorough assessment to ensure compliance.

Also, just because a provider is listed in the DPF list doesn’t automatically make them GDPR compliant. Data transfer is just one aspect of GDPR compliance. You must consider various other factors, including data processing, security measures, and consent management, to ensure comprehensive compliance.’
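To illustrate the second point: hashing an identifier is a form of pseudonymization, not anonymization, because the token stays linkable to the person. A tiny, hypothetical example:

```python
# Hedged illustration: hashing an email address is pseudonymization, not anonymization.
# The hash is stable, so anyone holding a list of known addresses can re-identify it.
import hashlib


def pseudonymize(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()


token = pseudonymize("jane.doe@example.com")
print(token)  # always the same token for the same address

# "Anonymized"? No: a simple lookup over known addresses links the token back to a person.
known_addresses = ["john@example.com", "jane.doe@example.com"]
lookup = {pseudonymize(a): a for a in known_addresses}
print(lookup.get(token))  # -> jane.doe@example.com
```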

Concerning performance, what are the current limitations of analytics tools, and where do you envision the field heading in the future?

‘Legacy tools ‘trained’ us to live by their rules – flawed data quality, fixed data models, a limited number of dimensions and metrics, biased attribution models, and the chronic lack of direct dialogue between the software vendor and the customers. The move to the ‘event-based model’ is a great idea but not at the expense of visit and user-based analysis.

The sunsetting of Universal Analytics and privacy compliance are triggering new thinking – what if I can start afresh, instead of continuing to patch things up? Many companies have made the brave decision to move away from Google Analytics here in Europe and have never looked back. Growing digital maturity has led many companies to build their own data pipelines and data products to improve their digital efficiency. But this is not for the faint-hearted and requires a substantial investment in both technical infrastructure and human capital. We expect this trend to continue.

AI will have a fundamental effect on our industry. More and more companies will embed AI in their customer journeys, and they will expect the same in analytics tools. It will go beyond machine-learning-based anomaly detection and prediction and into NLP-based analysis. Watch this space.’

Interview Matteo Zambon (Tag Manager Italia): is sGTM feasible for every organization?

In this exclusive interview, we have the privilege of speaking with a true trailblazer in the realm of digital analytics: Matteo Zambon, a pioneer who has shaped the Italian landscape of Google Tag Manager (GTM) and Google Analytics. He is not your typical academic expert; instead, he embarked on a unique journey of self-discovery and community-driven knowledge sharing, winning important analytics prizes like the Golden Punchcard and the Quanties Awards along the way. And… fortunately for us, he will be speaking at the upcoming DDMA Digital Analytics Summit on October 12th.

Come check out Matteo Zambon’s talk at the DDMA Digital Analytics Summit 2023 on October 12th. Tickets available at: shop.digitalanalyticssummit.nl.

Can you briefly introduce yourself? Who are you, and what do you do?

I’ve never followed an academic digital analytics path. I preferred to trace my own path from the very beginning, driven by a commitment to community knowledge, a desire for constant improvement and a burning passion for Google Tag Manager and Google Analytics in particular, and for the entire digital analytics world in general.

As a matter of fact, I was the first expert in Italy to popularize, starting in 2015, the importance of Google Tag Manager and to show web marketing professionals and entrepreneurs how to use the tool to improve the performance of their businesses and marketing campaigns. Besides that, I’m an official beta tester for Google Tag Manager and an alpha tester for Google Analytics 4.

Also, I co-founded and run Tag Manager Italia, one of the top digital analytics agencies in Italy, organized into three vertical business units (Consulting, Education, and R&D), together with my “dream team” of more than 25 experts and professionals.

SuperWeek, MeasureCamp Europe, MeasureCamp UK, MeasureCamp North America, ADworld Experience, Web Marketing Festival and SMXL (and MeasureSummit, of course) are just some of the international events where I’ve held workshops and talks.

To what extent is sGTM truly the solution for every organization, especially considering capacity, investments, and feasibility? Is it achievable for everyone?

I believe that sGTM is the most accessible, scalable, and effective solution for any company managing advertising campaigns and having digital business assets (websites, e-commerce, marketplaces, social media, etc.). Of course, provided that the company wishes to implement significantly more profitable and efficient business and marketing strategies through the collection and use of precise and timely data 🙂

Jokes aside, many companies have recognized that sGTM is a system that is hard to match, as it can integrate more quickly, simply, and effectively into any organization (from SMEs to non-profit organizations to multinational corporations) compared to most other suites on the market. In this regard, I would like to emphasize the significant cost-opportunity advantage and simplicity that the GTM ecosystem and especially sGTM put in the hands of digital analysts, marketing managers and advertising specialists.

First and foremost, thanks to integrations with third-party systems for Server-Side management, building tracking setups via GTM becomes much simpler and faster, and the quality and quantity of the collected data skyrocket. Costs are affordable for companies of any size, and tracking is carried out in full compliance with the GDPR regulations in force.

In my opinion, the only differentiating factor in choosing sGTM as a centralized tracking system is whether the company in question intends to fully leverage its digital assets, grow, and optimize its campaigns and marketing activities using data.

Is it technically feasible for anyone to use sGTM tracking systems? To this question, I answer ‘certainly not.’ Unlike the ‘classic’ Client-Side setup, creating and managing the Server-Side tracking systems that are now necessary requires specialized agencies and technical experts with advanced skills to manage and optimize them. In this scenario, dedicated budgets, proportional to website traffic and server requests, must be set aside for Server-Side activities.

As a Google Tag Manager guru, what GTM feature(s) would you like to see in the future? And perhaps more importantly, which feature(s) would you like to see disappear?

I love the GTM community, and I think custom templates are one of the best features ever released. Thanks to templates and the Community Gallery, you can create almost any tag or variable you need.

The big problem with the gallery is that there are no star ratings, written reviews or download counts. Sometimes it is difficult to work out which tag or variable is the right one, because there are two or three templates with similar functionality.

Another pain point is folders: you can’t use a folder across different elements, and you can’t create nested folders.

What are the most common mistakes that Digital Analytics professionals make regarding Google Tag Manager? What should they really stop doing?

One of the most common mistakes I see is blocking GTM from loading when the user does not accept marketing consent. GTM itself does not create cookies; it is the services activated through GTM that create cookies.

Another mistake is to think that GTM’s sole purpose is installing services (tags). It’s not just that: you can use GTM to expand the functionality of certain services, effectively making it a temporary data lake.

You’ve built the Italian community around GTM. Have you encountered anything specific to the Italian market that stood out? What challenges have you faced? Do you have any tips for those looking to implement similar initiatives in other countries?

Unlike what happens abroad, in Italy, the potential of GTM was not initially understood, and it is still not fully comprehended today, which is why we created the community. The biggest challenge has been responding patiently to the questions of some members of the community who would like universally applicable solutions rather than adopting a personalized approach to their specific needs. My advice is to start by helping people who are at the beginning of their journey in digital analytics, trying to understand their perspective and the difficulties they are facing.

Can you provide a sneak peek of what you’ll be discussing at the Summit?

My talk will focus on Real-Time Reports in GA4 and BigQuery. I chose this topic because many of the clients who turn to my agency for GA4 implementation consultations are bewildered by the subject of real-time reports. Real-time data was easily viewable with GA3, but it now requires a detailed implementation process with GA4 and BigQuery. The implementation and analysis of real-time reports are extremely useful and valuable for all businesses that need to monitor daily incoming traffic on their digital assets, like publishing and news websites.

During my talk, I will delve into the details of some real case studies of implementations and the customized technical solutions that my team and I have developed for our clients. I’ll also share the challenges we encountered, how we resolved them during the implementation phase, and the final results we achieved.
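For readers who want to poke at this ahead of the talk, the sketch below (not Matteo’s implementation) shows the general shape of querying near-real-time GA4 data from the BigQuery streaming export, which writes events into daily events_intraday_ tables; the project and dataset IDs are placeholders:

```python
# Hedged sketch: near-real-time GA4 event counts from the BigQuery streaming export.
# Assumes the GA4 streaming export is enabled for the property; IDs are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

sql = """
SELECT
  event_name,
  COUNT(*) AS events_last_30_min
FROM `my-gcp-project.analytics_123456789.events_intraday_*`
WHERE event_timestamp >= UNIX_MICROS(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 MINUTE))
GROUP BY event_name
ORDER BY events_last_30_min DESC
"""

for row in client.query(sql).result():
    print(f"{row.event_name}: {row.events_last_30_min}")
```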

Come check out Matteo Zambon’s talk at the DDMA Digital Analytics Summit 2023 on October 12th. Tickets available at: shop.digitalanalyticssummit.nl.

Tim Wilson (Analytics Power Hour): ‘We need to get comfortable with the probabilistic nature of analytics’

Tim Wilson is a seasoned analytics consultant with over two decades of experience. Lucky for us, he will be speaking at the DDMA Digital Analytics Summit on October 12th. We got the chance to talk with him beforehand, discussing analytics maturity across industries to questioning the utility of multitouch marketing attribution models. As a self-proclaimed “Analytics Curmudgeon”, he reflects on the evolving landscape of digital analytics, emphasizing the importance of shifting focus from data collection to purposeful data usage to unlock true business value.

Come check out Tim Wilson’s talk at the DDMA Digital Analytics Summit 2023 on October 12th. Tickets available at: shop.digitalanalyticssummit.nl.

Hi Tim, can you briefly introduce yourself? Who are you, and what do you do?

‘I’m an analyst. I stumbled into the analytics world by way of digital analytics a couple of decades ago, and I’ve been wandering around in a variety of roles in the world of analytics ever since. To be a bit more specific, I’m an analytics consultant who works in the realm of marketing and product analytics—working with organizations primarily on the people and process side of things. Or, to put it a bit more in data terms, I work with companies to help them put their data to productive use, as opposed to working with them on how they are collecting and managing their data.

At the moment, I’m between paid employment, as I left my last role at the beginning of this year to take a few breaths to figure out exactly what I’ll be doing next (as well as to have a few adventures with various of my kids as they fly the coop). So, “what I do” in analytics in the present tense is: co-host and co-produce the bi-weekly Analytics Power Hour podcast, co-run the monthly Columbus Web Analytics Wednesday meetup, speak at various conferences (like Digital Analytics Summit!), develop content for an analytics book I’m working on with a former colleague, and do gratuitous little analyses here and there to keep my R coding skills sharp.’

From your experience, it seems you’ve been a consultant for various industries, including healthcare, pharma, retail, CPG, and financial services. Do you see significant differences in the analytics maturity and strategy of their Digital Analytics activities, for instance looking at their governance?

‘I have to be a little careful about selection bias, as every company I work with is a company that has sought out external analytics support in some form. In theory, very analytically mature organizations—regardless of their industry—have less of a need for outside support.

Having said that, while the business models for different verticals vary, I see a lot of similarities when it comes to their analytics and analytics maturity. Perhaps painting with too broad of a brush, but every organization feels like its data is fragmented and incomplete, that there is more value to be mined from it, and that a deluge of actionable insights will burst forth if they can just get all of the right data pieces in place. Many organizations—again, regardless of their vertical—have a Big Project related to their data tooling or infrastructure under way: implementing a data lake, adding a customer journey analytics tool, rolling out a customer data platform (CDP), migrating to a new BI tool, or even simply shifting to a new digital analytics platform. Often, in my view, these efforts are misguided…but that’s the core of my talk at Summit, and I recognize it is a contrarian position.

I do think it’s worth noting that the nature of the data that organizations in different verticals have can be quite different. For instance, CPG/FMCG companies rarely have access to customer-level data for their customers, since much of the marketing and sales occurs through channels owned and managed by their distribution partners. Retailers often have both online and offline sales channels so, even if they have customer-level data in some form, the nature of that data varies based on the channel (and stitching together a single person’s activity across online and offline at scale is a losing proposition). And, of course, the sensitivity of the data can vary quite a bit as well—even as GDPR and other regulations require all organizations to think about personal data and be very protective of it, the nature of that data is considerably more sensitive in, say, healthcare and financial services, than it is in retail or CPG/FMCG.

I think I’ve given a prototypical consultant answer, no? Basically, “yes, no, and it depends!”’

On LinkedIn, you mention that you help clients choose algorithmic multitouch marketing attribution models. The once-promising idea that these models would be the holy grail of attribution has yet to be fully realized. How do you perceive this, and how do you ensure that these models are truly workable in today’s context?

‘Oh, dear. I have helped clients make those choices, but it’s always been under duress, because multitouch marketing attribution never did and never will actually be what many marketers expect it to be. I’ve delivered entire presentations and even posted a lengthy Twitter/X thread on the topic. Trying to be as succinct as possible, the fundamental misunderstanding is that multitouch attribution is an “assignment of value” exercise, but it gets treated as though it is a tool for “measuring value.” The latter is what marketers (and analysts) expect: how much value did channel X (or sub-channel Y, or campaign Z) deliver? The true answer to this question would be a calculation that takes the total revenue realized (or whatever the business metric of choice is) and then subtract from that the total revenue that was realized in a parallel universe where channel X was not used at all. In fancy-statistics-speak, this is the concept of “counterfactuals.” Obviously, we can’t actually experience multiple universes, but there are techniques that approximate them. Specifically, randomized controlled trials (RCTs, or experiments) and marketing mix modelling (MMM). Multitouch attribution, regardless of its degree of algorithmic-ness, is not particularly good at this. The other nice benefit of RCTs and MMM is that neither one relies on tracking individual users across multiple touchpoints, so a whole pile of privacy considerations—technical and regulatory—are rendered moot!

This doesn’t mean that RCTs and MMM are silver bullets. They’re inherently less granular, and they take time and effort to configure and run. Multitouch attribution has a place: it’s quick, it’s relatively easy, it can be very granular (keyword-, tactic-, or placement-level) and it provides some level of signal as to which activities are garnering a response. It doesn’t show, though, when any given response is cannibalizing a response that would have happened elsewhere in the absence of the tactic (think: branded paid search terms getting clicks that would have come through via organic search, anyway).

What I find exciting is that there is an increasing interest in RCTs, and MMM—which existed long before digital—is making a comeback. At the end of the day, the most mature companies use multiple techniques and use RCTs and MMM to calibrate each other and their multitouch attribution modeling.’
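As a toy illustration of the counterfactual logic described above (the numbers are entirely invented): a randomized holdout group plays the role of the “parallel universe” without the channel, so incremental value can be read off directly rather than assigned.

```python
# Hedged toy example: measure incremental value with a randomized holdout (RCT).
import random

random.seed(7)
n_exposed = n_holdout = 50_000
baseline_rate, channel_lift = 0.020, 0.004   # invented "true" effect, unknown in practice
order_value = 60.0


def simulate(users: int, conversion_rate: float) -> float:
    """Total revenue from `users` randomly converting at `conversion_rate`."""
    return sum(order_value for _ in range(users) if random.random() < conversion_rate)


revenue_exposed = simulate(n_exposed, baseline_rate + channel_lift)  # saw the channel
revenue_holdout = simulate(n_holdout, baseline_rate)                 # the counterfactual

incremental_per_user = revenue_exposed / n_exposed - revenue_holdout / n_holdout
print(f"Measured incremental revenue per exposed user: {incremental_per_user:.3f}")
print(f"True incremental revenue per user: {channel_lift * order_value:.3f}")
```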

It’s often said that the field of Digital Analytics is rapidly evolving. But is this really the case? We tend to cling to what we’re used to in our field. Can you provide your perspective as an “Analytics Curmudgeon” on this?

‘Let me first don my Curmudgeon Hat and say that, as Stéphane Hamel recently put it, “digital analytics is mostly ‘analytics engineering’ (aka ‘tagging’), and very few real analyses and business outcomes.” The data collection aspects of digital analytics have certainly been rapidly evolving: it wasn’t that long ago that we didn’t have tag managers, cookies have become increasingly unreliable as a means for identifying a single user across sessions (cookies were always a hack on which client-side tracking was built, so we shouldn’t really be surprised), and privacy regulations and browser and operating system changes have added even more challenges to comprehensively tracking users. As a result, there is a lot of handwringing by practitioners about how they’re having to work harder and harder simply to backslide as slowly as possible with the data they’re collecting.

When it comes to how data actually gets used to inform business decisions, there is also a continuing evolution. Ten years ago, very few digital analysts were even thinking about SQL, Python, or R as tools they needed to have in their toolbelt. While there are still (too) many analysts resisting that evolution, I truly believe they are limiting their career growth. Increasingly (and this is not particularly new), organizations are finding they have to work with data across different sources, and that often means some combination of programmatically extracting data through APIs and working with data that is housed in an enterprise-grade database, be it BigQuery, Azure, AWS, or something else. Along with those “broader sets of data” often comes “working with data scientists,” and that opens the door to smarter, better, and deeper thinking about different analytical techniques. My mind was blown—in a positive way—when these types of collaborations introduced me to several concepts and techniques: counterfactuals (which I referenced earlier), time-series decomposition, stationarity, first differences, and Bayesian structural time series. These are enormously useful, and they’re all much, much easier to do when using a programming language like R or Python. Really, this is an “evolution” that is about bringing time-tested techniques from other fields—econometrics, social sciences, and elsewhere—into the world of digital analytics.

And, of course, AI will drive some evolution in the space, too. My sense is that it is both underhyped and overhyped—mishyped, maybe?—but there are more than enough people with Strong Opinions on that subject already, so I’ll leave it at that.

But, yes, I think “rapid evolution” is a fair description of what’s going on in digital analytics. Some of that evolution is for the better, some of it really isn’t!’

What are the trends and developments that digital analytics professionals should really focus on within the field in the upcoming years?

‘There is almost certainly a gap—potentially a massive chasm—between what the industry will focus on and what I think they should focus on. I don’t have enough hubris to declare that I’m absolutely right, but the biggest trend I see being thrust upon the industry is a decline in the availability of person-level data. We’ve already touched on this—”privacy” both from a regulatory perspective and a technological perspective are driving organizations farther and farther away from the nirvana of a “360-degree view of the customer.” That nirvana was never achievable at scale, but organizations are increasingly more aware that that’s the case.

What I’d like the analytics industry to do as a response to this reality is twofold.

First, I’d like for us to stop treating complete, user-level data as an analytical goal in and of itself and, instead, embrace incomplete and aggregated data as being perfectly adequate. This means getting comfortable with the probabilistic nature of analytics—eschewing a search for an “objective truth” and, instead, viewing our role as “reducing uncertainty in the service of making decisions.” This requires a mindset shift on the part of analysts and a mindset shift on the part of our business counterparts. It’s no small feat, but it’s where I hope things go.

Second, I hope we start realizing how easy it is to get caught up in the technical and engineering challenges of collecting and managing data, and that we start actively pushing back against those forces to focus on how we’re helping our business counterparts actually use the data we’re collecting. It’s always easier to gather and integrate more data or push out another dashboard than it is to roll up our sleeves, identify the biggest problems and challenges the business is facing, and then figure out the most effective and efficient ways that we can use data (analytics, experimentation, research) to drive the business forward.

These are, admittedly, pretty lofty aspirations, but it’s where I think we need to go if we don’t want to find ourselves becoming marginalized as simply chart-generating cost centers.’

Can you provide a sneak peek of what you’ll be discussing at the Summit?

‘You kind of teed me up for this with your last question! I’ll be diving into the idea that all data work can be divided into two discrete buckets: data collection and management work, and data usage work. I’ll make the case that, while it is easy to get seduced into thinking that there is inherent business value in data collection, there really isn’t. The collection and management of data only provides the potential for business value. To actually realize business value, we have to do things with the data, and it’s either naive or irresponsible (or both) to expect our business counterparts to shoulder the entire load for that.

I’ll dive into some of the powerful forces that push us (and our business counterparts) to think that there is business value in data collection itself, and then I will (briefly) provide a framework for putting data to meaningful use.’

Come check out Tim Wilson at the DDMA Digital Analytics Summit 2023 on October 12th. Tickets available at: digitalanalyticssummit.nl.

Server-side tag management at HelloFresh as crucial factor to ensure privacy and compliance

HelloFresh recently made the switch to server-side tagging. It has allowed the meal delivery company to gain much more control over tracking on its platform, which operates in more than 17 countries, according to Alejandro Zielinsky, Global Digital Tracking/Measurement Lead at HelloFresh. During the DDMA Digital Analytics Summit, Alejandro will talk at length about the technical solution it took to make this turn. In this interview, conducted for the Life After GDPR Podcast, he gives a little sneak peek.

About Alejandro: Alejandro Zielinsky is Global Digital Tracking/Measurement Lead at HelloFresh. He and his team are responsible for collecting data and setting up the technology to give marketing and product analysts within HelloFresh the tools they need to do their jobs.


Why server-side tagging?

Everyone is talking about server-side tagging these days. At HelloFresh, it was the solution of choice for increasing the quality of the data they collect. Alejandro: ‘Before the switch, we were looking for a platform that allowed us to measure every hit we got. In total, we record about 3 billion events per month for all brands in all countries. That’s about 1,000 hits per second. Logically, we needed a platform that could handle this, and at the same time connect to APIs to enrich our data internally. That’s why we ended up choosing server-side tagging.’
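Those figures are roughly consistent with each other; a quick back-of-the-envelope check (assuming a 30-day month) looks like this:

    events_per_month = 3_000_000_000
    seconds_per_month = 30 * 24 * 60 * 60   # assuming a 30-day month
    print(round(events_per_month / seconds_per_month))  # about 1,157 events per second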

Another reason for this setup is that HelloFresh wanted to improve browser performance. Alejandro: ‘We were having a lot of problems with marketing tagging in the browser, on the client side. Also, locally, HelloFresh was quite a jungle. In each country we had a different setup to solve the same problem, which affected browser performance. Each country had separate tech managers and its own container.’

Now most of the data collection happens server-side, whether it’s for Google Analytics, for first-party solutions, or for sending data to marketing vendors such as Facebook, TikTok or Snapchat. Everything works with server-side APIs, using the GTM server, Alejandro explains: ‘With this setup, we can enrich data on the fly. We can remove data that we don’t actually need. In a number of countries this is very important, for example in Japan or in the GDPR countries in Europe.’
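To make that pattern concrete, here is a very rough sketch of server-side filtering and enrichment before an event is forwarded to a vendor. This is not HelloFresh’s actual GTM server setup; the field names, allow-list and vendor endpoint below are all invented for illustration:

    import requests  # any HTTP client would do; used here to forward events

    # Hypothetical allow-list: only these fields ever leave the server.
    ALLOWED_FIELDS = {"event_name", "order_value", "currency", "country", "consent_state"}

    def lookup_segment(hashed_user_id):
        # Placeholder for an internal enrichment API call (purely hypothetical).
        return "unknown" if hashed_user_id is None else "repeat_customer"

    def forward_event(raw_event: dict, vendor_endpoint: str) -> None:
        """Strip fields that aren't needed, check consent, enrich, and forward."""
        event = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}

        # Nothing goes to a marketing vendor without consent.
        if event.get("consent_state") != "granted":
            return

        # Example enrichment done server-side, on the fly.
        event["customer_segment"] = lookup_segment(raw_event.get("hashed_user_id"))

        requests.post(vendor_endpoint, json=event, timeout=2)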

This and much more will be covered during the Life After GDPR Podcast. Among other things, they will talk about:

  • How HelloFresh finally implemented server-side tag management
  • How they addressed privacy and consent, and how server-side tagging played a vital role in that
  • The challenges they encountered in complying with local laws around the world

Facilitating Data-Driven Decision Making at Adyen | With Melody Barlage

As a payment processing company, Adyen has been working with a lot of sensitive data since its origin. Accordingly, Adyen has always treated data carefully and securely, Melody Barlage, Product Manager of Business Intelligence at Adyen, explains. The same goes for the (perhaps less sensitive) data they later added to their database. During the DDMA Digital Analytics Summit, Melody will talk about how Adyen handles data. In this interview, conducted for an episode of the Life After GDPR podcast (by Rick Dronkers), she already gave a small taste of what to expect during her presentation.

About Melody: Melody is Product Manager of Business Intelligence at Adyen. She and her team facilitate a large team of data analysts and data end users throughout the organisation. Melody will be speaking at the DDMA Digital Analytics Summit 2022 on October 13. Tickets available at: digitalanalyticssummit.nl/tickets

Different layers of security

Currently, a large part of the world has made a payment through Adyen. Accordingly, some would say Adyen has a lot of sensitive data about pretty much everyone. It’s good to know, though, that all that data is not Adyen’s, but belongs to the merchants they service, Melody claims: ‘We’re just there to process the data and keep it safe. We use it internally to improve our processes and report to regulatory institutions, but that’s it. But naturally, because we process massive amounts of sensitive data, we do everything in our power to prevent any breaches. And even in the event of a breach, that data is always tokenized or hashed.

‘Other than that, we always work with hashed payment references. Also, a lot of data is aggregated, not only because of privacy considerations but also because of the sheer amount of data we’re processing. When data gets into Looker we add another layer of security, all to make sure that nobody within Adyen can access data they should not be able to see. Finally, everybody within Adyen is considered a security officer. Everybody should always ask themselves whether the data they have is really necessary for what they do and what they’re aiming to do.’
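As a generic illustration of the hashing technique (not Adyen’s actual implementation, and with deliberately simplified key handling), a keyed hash of a payment reference could look like this:

    import hashlib
    import hmac

    def pseudonymise_reference(payment_reference: str, secret_key: bytes) -> str:
        """Keyed hash of a payment reference: analysts can join and count on the
        token without ever seeing the original value."""
        return hmac.new(secret_key, payment_reference.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    # Hypothetical usage; in practice the key would come from a secrets vault.
    token = pseudonymise_reference("PSP-REF-000123", secret_key=b"example-key")
    print(token)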

This mindset has been there from Adyen’s beginning, Melody says: ‘As you can imagine, data about online marketing, Google Analytics, or the tracking on our website is not as important for us as it is for e-commerce companies. From the beginning, we started with much more sensitive data, and the data we added later we treated under the same regime. We consider all data valuable, but not all of it is equally sensitive. Because we already had that mindset, we decided to treat all data in the same way.’

Fraud and regulations

There are differences in what is allowed in the treatment of data across the world, for instance between the US and the EU. Because Adyen is active all around the world, they take local regulations very seriously and have local offices with local expertise. Naturally, this impacts Adyen’s local services. Melody: ‘We have products which you could use for commercial purposes in some parts of the world, but not in others because of regulations. Consent regulations, for instance, can be very different. But in some cases consent is not needed at all. Obviously this is the case when it comes to chargeback data coming from fraudulent transactions. There’s a lot more possible when dealing with these cases than for commercial purposes. We’re actually obliged to report suspicious transactions to the Financial Intelligence Unit (FIU).’

At Adyen, there are a lot of measures in place to discover suspicious activity. Melody: ‘If there is a suspicious transaction, we flag it and, if required, we pass it on to the authorities. But we do this not only on a transactional basis. Sometimes transactions only become suspicious in context. For instance, at Chanel, a 50K transaction seems normal. If you do it 20 times in a row though, it might not be normal.’
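A toy version of that “normal in isolation, suspicious in context” idea is a sliding-window check like the sketch below. This is illustrative only; the thresholds are invented, and real transaction monitoring is far more sophisticated:

    from collections import defaultdict, deque

    WINDOW_SECONDS = 3600            # look at the last hour per account
    HIGH_VALUE = 50_000              # one 50K purchase can be perfectly normal
    MAX_HIGH_VALUE_IN_WINDOW = 3     # many of them in a row is not

    recent_high_value = defaultdict(deque)  # account_id -> timestamps of big transactions

    def is_suspicious(account_id: str, amount: float, timestamp: float) -> bool:
        """Flag accounts that repeat high-value transactions within a short window."""
        if amount < HIGH_VALUE:
            return False
        window = recent_high_value[account_id]
        window.append(timestamp)
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_HIGH_VALUE_IN_WINDOW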

On-premise software as a USP

Looker is Adyen’s data modelling and data visualisation tool. Adyen currently has around 200 developers at least partially working on creating models and data visualisations in Looker, Melody explains: ‘Part of the reason we chose Looker is that we try to run as much as possible on-premise and open source. Also, when it comes to functionality, Looker has a very good way of managing permissions and consent. Essentially every data visualisation is possible, but whether you can see it depends on the data you’re allowed to see.’

The choice of on-premise software is not arbitrary. Of course, it drives performance, but it was also made for privacy and security reasons. The choice is in line with all of Adyen’s other on-premise activities, Melody Barlage explains: ‘Before merchants started working with cloud services, our on-premise way of working was actually one of our unique selling points. We could tell merchants we keep all their data to ourselves. Nowadays merchants use cloud services extensively, so we’ll have to see how this approach will develop in the future.’

Tools, processes and guidelines are context-driven

At Adyen, they work with immense amounts of data, and they’ve built an appropriate tech stack to handle it, Melody explains: ‘To some extent, we are bound to certain tools, like the Hadoop/Spark framework. It does an awesome job of storing massive amounts of data. We also work with Trino.io, a query-on-everything type of engine, which will be our new connector between Spark and Looker. We also increasingly make use of Druid, a database of sorts, which allows less flexible but extremely fast querying. Still, you have to keep in mind that our stack build-up is all context-driven. Your tech set-up really depends on your company. Eventually, everything comes down to making sure that everybody can find the right data they need to do their job and that the data is of good quality. With currently around 2,500 employees, we’re still in the process of professionalising this by imposing more and more rules.’
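The “query-on-everything” role that Trino plays can be sketched with its open-source Python client. The connection details, catalog and table names below are placeholders, not Adyen’s actual setup:

    import trino  # open-source Trino Python client

    # All connection details, catalog and table names below are placeholders.
    conn = trino.dbapi.connect(
        host="trino.internal.example",
        port=8080,
        user="analyst",
        catalog="hive",      # e.g. data landed via the Spark/Hadoop pipeline
        schema="payments",
    )

    cur = conn.cursor()
    cur.execute("""
        SELECT country, COUNT(*) AS transactions
        FROM transactions_aggregated
        GROUP BY country
        ORDER BY transactions DESC
        LIMIT 10
    """)
    for country, transactions in cur.fetchall():
        print(country, transactions)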

Adyen’s 20X mindset

Adyen’s developers know that the work they’re doing is very delicate. After all, they don’t want transactions to fail, Melody notes: ‘This is why our developers consider it normal to strictly follow our guidelines, to make sure products are sustainable.’

Some might argue that it is hard to pursue ambitious goals when a company is as careful as Adyen. Still, according to Melody, they aim high: ‘In our team, I want to achieve a 20X mindset, in which we continuously ask ourselves how we would want to work if we had 20 times more developers, 20 times more merchants, et cetera. How do we make sure we have a user-friendly environment for everybody in the organisation with everything 20 times bigger than it currently is?

‘The presence of this mindset varies from team to team. We manage it by having teams like mine, which have a central overview. But more importantly, it’s the forward-looking thinking embedded in our company that pushes this. If someone finds something important, they can take ownership, no matter where they’re coming from. If they have a good story, they can go ahead and do it. There’s a lot of freedom.’

Making tech work is easy, making people work together is the challenge

Some people say that making technology work is easy. Yes, it requires a lot of work, but in the end organisations always manage it. It’s the people, and how they work together, that often form the real challenge. A lot of organisations struggle with this, Melody claims: ‘This is especially the case in large enterprises, where discussions about the centralisation or decentralisation of teams come up regularly. It happened at Adyen too: at a certain point we had so many data people that we decided to decentralise them.

But as I said before, it also comes down to what works for your organisation. It certainly has to do with the number of people, but also with the kind of data you work with. What is also important: you have to adjust and learn. The upside for us is that, because we’re quite flexible, we’re not scared to completely move things around.’

On October 13 Melody will speak at the DDMA Digital Analytics Summit in Amsterdam. She will give you a taste of how Adyen uses data, provide some practical examples, and elaborate on how they have organised them and made them work. She’ll also touch briefly on the technical side, mainly because their big data platform is very impressive. Tickets available at: digitalanalyticssummit.nl/tickets