Welcome to the first of potentially many posts in a new yearly tradition. At some point during the year, I noticed that I don’t have a continuously updated list of the things I’ve been up to outside of this blog. And even besides that, it is sometimes hard to keep up with all the content that eventually makes it onto the blog. To help with both, I’m planning to release a post like this every year, probably in the last week of the year. I’m going to try my best to remember all the conferences, events, webinars, and guest posts on other sites (or even update a draft post throughout the year) and put them in some sort of order. Thank you for another great year on this blog, and I hope you enjoy all the different pieces of content! I have a YouTube channel now! Writing blog posts definitely is my go-to […]
Category: Web Analytics
Please, stop comparing Adobe Analytics to Google Analytics
This post is going to be a deviation from the “normal” content on this blog. Its purpose is to address one of the questions I receive most often from people reading my posts. The title might already give away what that question is: “Frederik, in your opinion, should companies buy Adobe Analytics or Google Analytics?” And I think there is something fundamentally wrong with this question. I think the above question can only be answered through some absurd level of generalization that does not do justice to either tool. There are some agencies or consultants who end up doing this comparison to either appear neutral and independent, or to drive SEO traffic to their own sites. This annoyed me to a point where I started writing this post to have my personal answer ready at hand in the future. Bear with me on this one. To be able […]
Retention Analysis in Adobe Analytics – Part 1: Cohort Tables and Builtin Functionality
User Retention is crucial to any digital offering. If you optimize your offering to a point where users come back on their own, you can not only save on marketing cost but also engage your existing users more. This makes retention analysis a prime example of how digital analytics can provide tangible business value. In this post, we are going to take a look at how we can analyze user retention with the most advanced digital analytics tool, Adobe Analytics. We are going to start in this post with the builtin analytics dimensions and metrics, then take a look at cohort tables, and in the next post even build our own Segments and Calculated Metrics to help us understand retention. Let’s get started! Builtin Retention Metrics and Dimension To start things off, we will take a quick look at what Adobe Analytics has to offer out-of-the-box to help us understand […]
Call for contributions! Introducing the Open Adobe Analytics Component Repository
Over the last few months I created quite a lot of Calculated Metrics and Segments for this blog. While the feedback has been great, it became more and more difficult, for me and others, to keep up with all the different metrics and where exactly I used them. I’ve been using a private Github repository to keep track of everything I create, which I am now making available to the public. I will put all the metrics and segments I have already created on there as I migrate them from my private repo. The same will be true for future posts on my own blog. My hope is that this will help me stay on top of all those components and maybe help somebody else find them more quickly. But since I host it on Github, why not make this a collaborative effort? Share your work and earn kudos From […]
Time Series Analysis through Moving Averages – Statistics in Adobe Analytics
In what has become one of the most-read series on this blog, I am showing some examples of what Adobe Analytics has to offer in terms of statistical analysis. In the previous posts we took a look at simple averages and standard deviations, regression analysis, and even forecasting. In this post we are going to use a variation of the simple mean called the moving average. When dealing with time series data we might encounter what is called “noisy data”. Instead of showing as a steady line, our KPIs might go up and down from day to day, making it hard for us to judge where the general trend is headed. One way of solving this is through the regression modeling we did before, which gives us a straight approximation line. But what we can also do is average the data over a defined window along our series, which is […]
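As a quick illustration of the concept, here is a minimal sketch of a moving average in Python with pandas. The post itself builds this as a Calculated Metric inside Adobe Analytics; the numbers below are made up.

```python
# A minimal moving-average sketch with pandas; the post builds the same
# idea with Calculated Metrics in Adobe Analytics. Values are made up.
import pandas as pd

# Hypothetical daily KPI, e.g. Unique Visitors per day
daily = pd.Series(
    [120, 135, 110, 150, 145, 90, 160, 155, 140, 170],
    index=pd.date_range("2020-01-01", periods=10, freq="D"),
)

# Smooth the noisy series over a 7-day window;
# min_periods=1 keeps the first days instead of returning NaN.
smoothed = daily.rolling(window=7, min_periods=1).mean()
print(smoothed)
```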
Summary Report Suites in Adobe Analytics
Ever since Adobe released Analysis Workspace in 2016, customers have been looking for a way to combine data from multiple Report Suites within a single Project in Analysis Workspace. Earlier in 2020, we finally became able to pull data from more than one Report Suite in Workspace, since Report Suites can now be selected on a Panel level instead of for the whole project. While this feature is awesome and a huge improvement, users like myself still wanted to combine multiple Report Suites in a single Freeform Table within the same Panel. Unfortunately, we still have to invest some work to get a view like this, where the right Table combines data from the two Report Suites on the left: The screenshot above was taken from a Report Suite which was created to summarize data from other Report Suites. This can be done without any changes to the implementation or additional […]
Advanced Time Series Analysis through Linear Regression – Statistics in Adobe Analytics
Previously in this little series, we took a look at how we can describe our trended data by using the statistical Mean and Standard Deviation. While this works quite well for data that doesn’t change much over time, it is rather limited when it comes to taking trends into account. With this post, we are doing something about that issue by using Linear Regression techniques. At the end of this post, you will get an Analysis Workspace project like the one below, where we can judge trends in data and see changes over time: Let’s get our hands dirty! Limitations of Mean and Standard Deviation Before we start, I want to explain the problem outlined above a bit better. Please consider the following graph I generated with the Workspace from the previous post and some demo data: What we see is a clear trend in our data, since our daily Unique Visitors are […]
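For readers who like to see the idea outside of Analysis Workspace, here is a minimal sketch of fitting a trend line in Python with NumPy. The post itself uses Adobe’s built-in regression functions in Calculated Metrics; the data below is invented.

```python
# Fit a straight trend line through noisy daily data with least squares.
import numpy as np

days = np.arange(10)                           # x: day index
visitors = np.array([100, 110, 125, 130, 150,  # y: daily Unique Visitors
                     155, 170, 180, 190, 205])

# polyfit with deg=1 returns [slope, intercept]
slope, intercept = np.polyfit(days, visitors, deg=1)
trend = intercept + slope * days               # the approximation line
print(f"visitors = {intercept:.1f} + {slope:.1f} * day")
```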
Simple Time Series Analysis through Standard Deviation – Statistics in Adobe Analytics
In my last post, we took a look at how Descriptive Statistical Analysis can help us understand our site performance using the simple Mean. I introduced the concept of conditional counters to help us identify our top- and bottom-performing sites. Today we are going to extend our knowledge of descriptive statistical methods by using Standard Deviation on trended data and applying conditional counters to it as well, but with a new spin. If conditional counters are new to you, it might help to check out that last post! Like last time, we are setting ourselves a goal for this post. At the end, we want to have a nice workspace to help us understand our trended data better. We need a way to judge if the fluctuation in our data is within an expected range and how often it is not. This is what we are going to build: Let’s […]
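As a small illustration of the goal, here is what the “expected range” check looks like in plain Python with NumPy. The post builds it with Calculated Metrics in Workspace; the numbers here are invented.

```python
# Count days outside the mean +/- 1 standard deviation band --
# the same idea as a conditional counter on trended data.
import numpy as np

daily = np.array([120, 135, 110, 150, 145, 90, 160, 155, 140, 170])
mean, std = daily.mean(), daily.std()

outside = (daily < mean - std) | (daily > mean + std)
print(f"{outside.sum()} of {daily.size} days outside the expected range")
```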
Simple Mean and Conditional Counters – Statistics in Adobe Analytics
In my last post, we took a look at how we can predict the future through Regression Analysis with Adobe Analytics and visualize it in Analysis Workspace. While that was quite an advanced post, there are a lot of things we can do using basic statistical analysis. This is what we are going to look at in this post, exploring some ways to describe our data in a standardized way. At the end of this post, we want to describe our relative page performance for a website like this, showing us top- and low-performing pages and how many there are of both: Describing ranked website performance relative to the Mean This first part will show how we can level up our ranked reports. Let’s pretend we want to judge how certain pages on our website are performing. To do this, we might start with a simple table containing our Page […]
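To make the conditional-counter idea concrete, here is a minimal sketch in pandas; the post implements it as a Calculated Metric, and the page names and view counts below are hypothetical.

```python
# A conditional counter relative to the mean, sketched in pandas;
# the post implements this as a Calculated Metric. Data is made up.
import pandas as pd

views = pd.Series(
    {"home": 900, "pricing": 400, "blog": 350, "imprint": 50, "careers": 100}
)

mean = views.mean()
top = (views > mean).sum()   # pages performing above the mean
low = (views <= mean).sum()  # pages at or below the mean
print(f"mean={mean:.0f}: {top} top-performing, {low} low-performing pages")
```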
Predictive Regression Analysis – Statistics in Adobe Analytics
Adobe Analytics is awesome for analyzing historical data. Besides Segments, Drilldowns, or Derived Metrics, it also offers some advanced statistical functions like Regression Analysis. Here are some examples of the different regression models that are available today: It would be really cool if we could use this functionality to predict the future with those regression models! This is what this article is going to describe, using advanced Calculated Metrics. In the end, we want to have a graph like this, with the historical and future data in the same visualization: We will go through the whole process of generating a metric like the one shown above. If you just want the result, you can scroll down to the bottom of this article, where I show the complete metric. Let’s start! Statistics 101: Simple Linear Regression in Adobe Analytics To start things off, let’s remind ourselves what regression analysis does. To keep […]
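As a refresher on the underlying math, these are the standard least-squares estimates behind simple linear regression, which the post then maps onto Adobe’s regression functions:

```latex
% Simple linear regression: fit y = a + b x by least squares.
% With n observations (x_i, y_i), slope b and intercept a are:
\[
b = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
         {\sum_{i=1}^{n}(x_i - \bar{x})^{2}},
\qquad
a = \bar{y} - b\,\bar{x}
\]
% A prediction for a future point x^* is then \hat{y} = a + b x^*.
```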
So, is Web Analytics your dream job?
I’ve been working in Web Analytics for over a decade. During that time I had the pleasure to meet a lot of people: Analysts, product owners, marketers, architects, developers, and so on. I hired a bunch of them, applied to others myself, and onboarded and trained a whole lot over the years. No matter who I’ve been talking to, sooner or later, one type of question would always come up: Will this be fun? Am I going to be okay? There are a lot of articles out there focused on the skills needed to start with Web Analytics. As always, Google can help you find those (or go to Julien’s Blog if you want a recommendation). There also are some talking about the necessary mindset. With this one, I will try to give you an impression of the qualities I observed while talking to Web Analysts who love what they do. […]
Supercharge your Adobe Analytics Classifications with Google Sheets and Automation
Classifications are one of the best features of Adobe Analytics. They allow you to enrich and translate tracked values by uploading classification files. One of the most common use cases is handling marketing campaign tracking codes, which can be translated from technical IDs to understandable details about the campaign. This can be automated to a great extent, which is what this article will be about. We are going to look at the architecture of our solution and plan our implementation. Right after that, we will start building our spreadsheet in Google Sheets and create an automatic upload to Adobe Analytics using the Python programming language. If you are just interested in the final script, you can find it on Github. What we love and hate about Classifications When you are using Adobe Analytics, chances are pretty high that you are already using some form of classifications. Rightfully so, because they are […]
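To give a feel for the moving parts, here is a minimal Python sketch of the first half of that pipeline: pulling rows from a Google Sheet and writing them as a tab-separated classification file. The sheet ID and column layout are hypothetical placeholders, and the actual upload to Adobe Analytics is left to the full script on Github.

```python
# A minimal sketch: read a Google Sheet via its CSV export and write a
# tab-separated classification file. Sheet ID and columns are hypothetical;
# the upload to Adobe Analytics is not shown here.
import csv
import io
import urllib.request

SHEET_ID = "your-google-sheet-id"  # hypothetical placeholder
CSV_URL = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

with urllib.request.urlopen(CSV_URL) as response:
    rows = list(csv.reader(io.TextIOWrapper(response, encoding="utf-8")))

# Assume the sheet holds three columns: tracking code, campaign name, channel
with open("classifications.tab", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Key", "Campaign Name", "Channel"])  # header row
    for tracking_code, campaign, channel in rows[1:]:     # skip sheet header
        writer.writerow([tracking_code, campaign, channel])
```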
Adobe Analytics Introduction: Terms and Concepts
This is one of several posts aiming to give an introduction to Adobe Analytics. They are intended as both tutorials and references for future use. While there already are a lot of good sources for this, some are quite dated and miss connections to recently released features and enhancements. In this post, I will explain some general things that are helpful to know when starting with Adobe Analytics. We will go over different interfaces to analyze data, explain Dimensions, Metrics, and Events, and name some common integrations. Know what you are looking at: Dimensions One of the most important building blocks of Adobe Analytics are Dimensions. With Dimensions, we capture descriptive values on our websites or in our apps. Many people call them variables when explaining the general concept. On a website we might record the name of a certain page in a dimension. This would allow us to report […]
Generating more business value with the Adobe Analytics dashboards App
The Adobe Analytics dashboards App has been out for a few days now. It has been one of the most requested features among Analytics users for years. Personally, I had to disappoint my business users for quite some time whenever they asked for an App. So naturally, I was quite happy when it finally came out. Before the app arrived, we had to build workarounds to enable people to take Analytics data wherever they go. At my company we utilized Power BI to pull data from Analytics and offer it in some form of mobile app. That was a huge pain, since we had to rebuild things we already had in Analysis Workspace and maintain two products. We also had to make huge compromises regarding interactivity with data and visualizations. I’m very happy we don’t need to do that anymore! One of the concerns I had before I gained Beta […]
Importing Organic Google Search data to Adobe Analytics with a single script
Some time ago, I published an article explaining how to get Google Search performance data from the Google Search Console to Adobe Analytics. In that post, I explained how to query the Google Search API, write the result to an Adobe Analytics Data Source file, and upload it to Adobe Analytics. The same can be achieved in a more automated way using the Adobe Analytics Data Sources API, which is what this article is about. It explains how to use a script I published on Github. If this feels too advanced, feel free to go back to the old article. So, why another article about this topic? The old post received a lot of attention and led to some companies adopting the methods I described there. But if you try to implement it in a production environment, you need to take care of some things yourself. For example, you need […]
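As a taste of the Google side of this flow, here is a minimal sketch that queries the Search Console API with Google’s official Python client. The site URL, date range, and credentials file are placeholders, and the Adobe Data Sources upload step is left to the published script.

```python
# A minimal sketch of querying Google Search Console with the official
# client library. Site URL, dates, and credentials file are placeholders;
# the Adobe Data Sources upload is not shown here.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical credentials file
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2020-01-01",
        "endDate": "2020-01-31",
        "dimensions": ["date", "query"],
    },
).execute()

# Each row carries the dimension keys plus clicks and impressions
for row in response.get("rows", []):
    date, query = row["keys"]
    print(date, query, row["clicks"], row["impressions"])
```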
Getting Google Search Keywords into Adobe Analytics
While Adobe Analytics is a much more mature solution compared to Google Analytics, the latter has always had an advantage when it comes to Search Keywords. It shouldn’t surprise us that the company that offers both the search engine and the analytics tool has some integration between them. While it was easy to get search keywords from the target URL in the past, those times have been gone for years now. Ever since then, businesses have been struggling to know what their visitors were initially looking for when they came to their website. This article outlines a couple of ways to achieve this in Adobe Analytics. For this post we will take a look at the integration Adobe offers to Analytics Prime customers, called Advertising Analytics. Right after that we are going to build our own integration based on the same method to get some insight into Google Ads performance. To […]
Trying out the new Adobe Analytics App
Adobe Analytics still is the most complete solution for Digital Analytics. But for years, there has been one thing missing: a mature way to use dashboards on the go, without using your computer. While Analytics is usable in mobile browsers on a technical level, it is not the best user experience for either Analysts or Business Users. This is why a real Mobile App has been one of the most requested features over the years. And guess what: Adobe just released one! Who this App is made for There is one important thing to know about this new App before diving into the features and interface. Let’s ask ourselves first who the target audience for this app is, because it most likely is not primarily made for you if you are an Analyst. It is not made to offer the same feature set that Analysis Workspace offers, and I’m […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 7: Analytics Dashboard
This is the seventh part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are building an Analytics Dashboard in Kibana for our data in Elasticsearch. In the previous post we built the connection from Kafka to Elasticsearch and Clickhouse to store the data. If you are new to this series it might help to start with the first post. We have come a long way in this series. We built everything from the client implementation with Snowplow to the processing and enrichment pipelines with Kafka and Python, and stored all the data in Elasticsearch. Now it is time to make that data accessible in an appealing way to analysts and business users. The obvious solution for Elasticsearch is Kibana, which is developed by the same company and is designed to work perfectly with Elasticsearch! Web Analytics Dashboard in Kibana In Kibana, […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 6: Data Storage
This is the sixth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are taking a brief look at what we can do with the data we collected and processed, using Clickhouse. In the previous post we built a persisted visitor profile for our visitors with Python and Redis. If you are new to this series it might help to start with the first post. During this series we defined multiple topics within Kafka. Now we have different levels of processing and persistence available. If we want to keep any of it, we should put it in a persistent storage like a Data Lake with Hadoop or a Database. For this project, we are using Elasticsearch and dipping our toes into a database called Clickhouse for fun! Feeding Data into Elasticsearch From the previous part, we have a nice Kafka […]
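For a rough idea of what that Kafka-to-Elasticsearch hop can look like, here is a minimal Python sketch using kafka-python and the official Elasticsearch client (8.x-style API); the topic, index, and host names are placeholders for whatever your setup uses.

```python
# A minimal sketch: consume processed events from Kafka and index them
# into Elasticsearch. Topic, index, and hosts are placeholders.
import json

from elasticsearch import Elasticsearch
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "tracking-processed",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
es = Elasticsearch("http://localhost:9200")

for message in consumer:
    # Index every processed tracking event into Elasticsearch
    es.index(index="tracking-events", document=message.value)
```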
Building an Enterprise Grade OpenSource Web Analytics System – Part 5: Visitor Profile
This is the fifth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are going to build a visitor profile with Python and Redis to persist some of the data we track. In the last post we processed the raw data using Python and wrote it back to Kafka. If you are new to this series it might help to start with the first post. Now that we have a nice processed version of our events, we want to remember certain things about our users. To do this, we are going to create a Visitor Profile in Redis as a high-performance storage. The process for persisting values will look like this: Building our Visitor Profile First things first in this part: we are setting up a little helper script that will take our processed tracking events and flatten them. It looks […]
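As a simplified sketch of the idea, here is how a per-visitor profile can be persisted in Redis with redis-py; the key layout and event fields are hypothetical, and the helper script in the post does more flattening than this.

```python
# A minimal sketch of persisting per-visitor values in Redis hashes;
# key layout and event fields are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def update_profile(event: dict) -> None:
    """Remember a few things about a visitor in a Redis hash."""
    key = f"visitor:{event['visitor_id']}"        # one hash per visitor
    r.hincrby(key, "event_count", 1)              # lifetime event counter
    r.hset(key, "last_seen", event["timestamp"])  # most recent activity

update_profile({"visitor_id": "abc123", "timestamp": "2020-04-01T12:00:00Z"})
```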
Building an Enterprise Grade OpenSource Web Analytics System – Part 4: Data Processing
This is the fourth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are building the processing layer to work with our raw log lines. In the last post we used Nginx and Filebeat to write our tracking events to Kafka. If you are new to this series it might help to start with the first post. At this point in the series, we have a lot of raw tracking events in our Kafka topic. We could already use this topic to store the raw log lines in our Hadoop cluster or a database. But doing some additional processing now will make our life a little easier later on. Since Python is the data science language of today, we will be using that language. The result will then be written to another Kafka topic for further processing […]
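To make that step concrete, here is a minimal Python sketch that reads raw lines from one Kafka topic, parses them, and writes the result to another topic with kafka-python. The topic names and the parsing assumption (that each raw line is a request URL carrying query parameters) are placeholders for what the post actually builds.

```python
# A minimal sketch of the processing hop: consume raw lines, parse the
# query parameters, and produce a structured event to a second topic.
import json
from urllib.parse import parse_qs, urlparse

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer("tracking-raw", bootstrap_servers="localhost:9092")
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    raw = message.value.decode("utf-8")
    # Assume the raw line is a request URL; keep its query parameters
    event = {k: v[0] for k, v in parse_qs(urlparse(raw).query).items()}
    producer.send("tracking-processed", event)
```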
Building an Enterprise Grade OpenSource Web Analytics System – Part 3: Data Collection
This is the third part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are setting up the tracking backend with Nginx and Filebeat. In the last post we took care of the client-side implementation of Snowplow Analytics. If you are new to this series it might help to start with the first post. Now that we have a lot of data being sent from our clients, we need to build a backend to take care of all the events we want to capture. Since we are sending our requests unencoded via GET, we can just configure our web server to write all requests to a logfile and send them off to the processing layer. Configuring Nginx with Filebeat In our last project we used a configuration just like the one we need. As web server, we used and will […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 2: Client Tracking
This is the second part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are setting up the Client Tracking using the Javascript tracker from Snowplow Analytics. In the last post we took a look at the system architecture that we are going to build. If you are new to this series it might help to start with the first post. When building a mature Web Analytics system yourself, the first step is to build some functionality into your app or website to enable sending events to the backend analytics system. This is called client-side tracking, since we rely on the application to send us events instead of looking at logfiles alone. For this series we are going to look at website tracking specifically, but the same principles apply to mobile apps or even server-side tracking. Almost every mature […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 1: Architecture
Some time ago I wrote a little series on how to amp up your log analytics activities. Ever since then I have wanted to start another project: building a fully-fledged Analytics system with client-side tracking and unlimited scalability out of OpenSource components. This is what this series is about, since I had some time to kill during Easter in isolation. This time, we will be using a tracker in the browser or mobile app of our users instead of logfiles alone, which is called client-side tracking. That will give us a lot more information about our visitors and allow for some cool new use cases. It also is similar to how tools like Adobe Analytics or Google Analytics work. The data we collect then has to be processed and stored for analysis and future use. As a client-side tracker, we will be using the Snowplow tracker. […]
Analysis Workspace Hacks (AGE) – Metric Targets
This is a post in the Adam-Greco-Edition (AGE) series of posts. They aim to iterate on some great posts by Adam Greco, showing some different approaches to achieve similar things. In another great post, Adam Greco showed how we can have Metric Targets in Analysis Workspace. His approach includes setting up a Data Source to import Goals to a Custom Event. This is a very nice approach, but it has some serious limitations. Because it utilizes Data Sources, all their limitations apply (see documentation). Most importantly, data cannot be deleted or changed once it has been imported. Also, we need to sacrifice a Custom Event for every Goal we set. The setup is also very involved and not suited for non-techie people. What I would like to have is a Goal Metric that does not use valuable Custom Events, is changeable over time, and is understandable and usable by non-technical users. As […]
Analysis Workspace Hacks (AGE) – Average Daily Unique Visitors
This is a post in the Adam-Greco-Edition (AGE) series of posts. They aim to iterate on some great posts by Adam Greco, showing some different approaches to achieve similar things. In one of his posts, Adam Greco shows a way to replicate the Daily Unique Visitors Metric from Reports & Analytics in Analysis Workspace. His approach involves creating a Calculated Metric for a given time range, summing up the Visitors for each day. There are some limitations to that approach. The obvious one is that we need a new metric for each date range we want to analyze; we can’t use a 7-day Metric if there are 8 days to analyze. Second, Visitors are not deduplicated but summed up over all days in the reporting window (just as in the old interface); so a Visitor visiting our site three times would be counted as three Visitors. Last, the name could […]
Analysis Workspace Hacks – Next and Previous Page Report
Analysis Workspace is the most capable solution for Web Analysts today. It allows us to switch between building a Dashboard or old-school Report or something in the middle on the fly. It has surpassed the old Reports & Analytics Interface in functionality and workflow effectiveness and leaves you longing for it once you start using different solutions. But there is one thing that is not that awesome in Analysis Workspace yet: Pathing. Once you activate Pathing for a custom prop, the old interface gives you Next and Previous Reports for that prop, just like with the Page Dimension: As a result we get a nice table with the Next or Previous Dimension Items for a given Item. Hacking Analysis Workspace’s Flow Visualizations The closest thing to that functionality is the Flow Visualization in Analysis Workspace. It allows us to see a Flow of Users between Dimension Items or even across […]
Analysis Workspace Hacks – Link Events on Page Reports
Adobe Analytics gives us two types of events to use for our tracking implementation. With Page Tracking (calling s.t() on websites or trackState() in apps) we are supposed to measure when a page has been viewed. If we want to measure interactions on a given page, we would use Custom Link Tracking (s.tl() on the web and trackAction() in apps) for that. The reasoning behind this is quite simple: if there was only one function, we would either end up with increased Page Views for every on-page event or have to take care of the distinction ourselves by using valuable props or eVars. So from a simplicity standpoint this approach makes a lot of sense. But there is one problem: when using Custom Link Tracking, you cannot set a pageName for that call. Adobe Analytics just ignores whatever you set for the pageName, because pageNames only make sense in the […]
Migrating from Android’s BroadcastReceiver to Google Play Install Referrer API with Adobe Analytics
Adobe Analytics can track not only websites, but mobile apps just the same. This is achieved by using the Adobe Experience Platform Mobile SDKs for native iOS and Android apps. One very interesting part of tracking mobile apps is known as acquisition tracking, which looks at how users found the tracked app. To help with this, Adobe exposes some functionality in their SDKs to listen for the events that the mobile operating system uses to tell apps about the way they have been installed. This happens “automagically” on iOS but needs some custom implementation on Android. BroadcastReceiver and Install Referrer API on Android Adobe requires the use of a very old implementation method called BroadcastReceiver. That method relies on the Google Play Store App sending a message (a broadcast) to the app that has just been installed, telling it about the details of the install (like which marketing campaign has […]
Building your own Web Analytics from Log Files – Part 6: Conclusion
This is the sixth part of the six-part series “Building your own Web Analytics from Log Files”. In this series we built a rather sophisticated logging and tracking functionality for our website. We used OpenResty to identify and fingerprint our users via cookies, stored that information in log files which were shipped to Elasticsearch, and visualized it with Kibana. Web Analytics democratized By using those techniques, we are able to use what we already have (log file processing) to answer questions about our users. Under the best conditions this doesn’t even lead to a bigger technical footprint. This way we can have deep insights into our user behavior without external tools. Even as a startup or hobby developer you are now able to put the user first on your digital platforms. Next steps While this series is done for now, we have a starting point to further build our platform. With some frontend […]
Building your own Web Analytics from Log Files – Part 5: Building our first Dashboard
This is the fifth part of the six-part series “Building your own Web Analytics from Log Files”. At this point in the series we have our log files in Elasticsearch, with indices like “custom-filebeat-tracking-logs-7.4.0-2020.01.03”. The first thing is to set up a Kibana index pattern for this. Kibana Configuration In Kibana we go to Management -> Index Patterns -> Create index pattern. As the index pattern we use “custom-filebeat-tracking-logs-*”, which gives us all the indices following our daily index pattern. In the next step, we set the Time Filter field name to “@timestamp”. This is the timestamp that marks the point when Filebeat indexed the document. This is fine for now; we click “Create index pattern” and are done with this part! Checking our Data Now, let’s head to the Discover section in Kibana and look at our index pattern. And there it is: our log entries show up like we wanted: This […]
Building your own Web Analytics from Log Files – Part 4: Data Collection and Processing
This is the fourth part of the six-part series “Building your own Web Analytics from Log Files”. Legal Disclaimer: This post describes how to identify and track the users on your website using cookies, IP addresses, and browser fingerprinting. The information and process described here may be subject to data privacy regulations under your legislation. It is your responsibility to comply with all regulations. Please educate yourself on whether things like GDPR apply to your use case (which is very likely), and act responsibly. In the last part we built a configuration for OpenResty to generate user and session IDs and store them in browser cookies. Now we need a way to actually log and collect those IDs together with the requests our web server handles. OpenResty Configuration To be able to log our custom variables we need to announce them to Nginx. This is done right in the server part of […]
Building your own Web Analytics from Log Files – Part 3: Setting up Nginx with OpenResty
This is the third part of the six-part series “Building your own Web Analytics from Log Files”. Legal Disclaimer: This post describes how to identify and track the users on your website using cookies and browser fingerprinting. The information and process described here may be subject to data privacy regulations under your legislation. It is your responsibility to comply with all regulations. Please educate yourself on whether things like GDPR apply to your use case (which is very likely), and act responsibly. Identifying Users and Sessions One of our goals for this project is to be able to tell how many people are using our site. This means we need a way to differentiate between the users on our site. One approach would be to look at the IP addresses of our users. This is not very precise, since all devices with the same internet connection share an IP address. Especially for […]
Building your own Web Analytics from Log Files – Part 2: Architecture
This is the second part of the six-part series “Building your own Web Analytics from Log Files”. Architecture Overview To start off this series, let’s remember what we want to achieve: we want to enable a deeper understanding of our website users by enriching and processing the log files we already collect. This article looks at the components we need for this and how to make our life as easy as possible. To achieve our goal, we need to teach our web server to identify our users, store information about their activity in the log files, ship those files to storage, and make the data actionable with a way of visualizing it. Because I believe in Open Source Software, we will look at our options in that category. Another requirement is to introduce as few components as possible and keep scalability in mind. Choosing our Web Server The first part of our […]
Building your own Web Analytics from Log Files – Part 1: Motivation
This is the first part of the six-part series “Building your own Web Analytics from Log Files”. What is Web Analytics? As the owner or administrator of a website, you will go through different phases of maturity. When you are just starting with a hobby or web project, you will most likely care about the technical setup and gaining traction. Once everything is up and running, you will start asking yourself questions like: How many people are using my website? How many of those are new Visitors? Which page on my website attracts the most (new) Visitors? Those questions are Web Analytics questions. It is what Web Analysts spend their time on to deliver value to the business behind it. To achieve that, we most commonly use tools like Piwik (Matomo), Google Analytics, or Adobe Analytics. Those tools rely on some Javascript code that needs to be integrated on a website […]