On one hand, Adobe Analytics remains my favorite web analytics tool on the market. The longer I use it, the more I appreciate all the well thought-out features, from data collection to processing, storage, and analysis. Those features are even more impressive when compared with what Google Analytics has to offer. And yet, on the other hand, even I can’t avoid having to work with Google Analytics in one way or another. In a large, global company, it is basically unavoidable to find Google Analytics on some small, long-forgotten marketing landing page in some market. It gets even worse: Up until last year, I personally had to maintain an inherited Google Analytics instance on a legacy website and app. What a cruel world! Besides those cases where someone in your company actually wants to use Google Analytics, there are also more forgivable cases. For example, a company may be […]
Tag: Python
Monitor Adobe Analytics usage for free with Power BI and Python
Adobe Analytics is, without a doubt, the most complete and feature-rich product for Web Analytics and Reporting on the market. I won’t even try to list all its features, since I would definitely forget some or would have to update the list in a few months as new functionality is released. And while I, as an analyst and power user, love having all those great tools available, they create a challenge for me in my other role as an analytics admin. All of those features add complexity to the everyday work of our business users. For example, when Analysis Workspace was released in 2016, users had to learn a new interface to get the most value out of Adobe Analytics. But as an admin who knows their users, I have a strong feeling that some people still use the old Reports & Analytics interface in 2021. […]
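One way to check such a feeling against actual data is the usage log that Adobe exposes through the Analytics 2.0 API. Below is a minimal sketch of that idea in Python, assuming an access token has already been obtained (for example via a server-to-server credential); the company ID, credential values, and printed fields are placeholders, and the returned rows could then be exported for Power BI.

```python
# Minimal sketch: pull Adobe Analytics usage/audit log entries via the 2.0 API.
# Assumes a valid access token and global company ID; all values below are
# placeholders, and field names follow the public API documentation.
import requests

ACCESS_TOKEN = "<access-token>"
API_KEY = "<client-id>"
COMPANY_ID = "<global-company-id>"

def fetch_usage_logs(start_date: str, end_date: str) -> list:
    """Return raw usage log entries between two ISO-8601 timestamps."""
    url = f"https://analytics.adobe.io/api/{COMPANY_ID}/auditlogs/usage"
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "x-api-key": API_KEY,
    }
    params = {"startDate": start_date, "endDate": end_date, "limit": 100}
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    return response.json().get("content", [])

if __name__ == "__main__":
    for entry in fetch_usage_logs("2021-01-01T00:00:00-00:00",
                                  "2021-01-31T23:59:59-00:00"):
        print(entry.get("dateCreated"), entry.get("login"),
              entry.get("eventDescription"))
```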
Visualizing Adobe Analytics Report Suites for free with Python and Power BI
Adobe Analytics is super flexible in the way it can be set up to exactly match the requirements of business users, analysts, and developers. A crucial part of any implementation is the creation and configuration of the Report Suites, which can be seen as the backend databases of Adobe Analytics that hold the events sent to them. In theory (and, in some setups, in practice), each and every Report Suite can have a completely individual set of variables and metrics. However, the option to create an individual configuration of dimensions and events for each Report Suite comes with a hefty long-term cost. For example, each and every setup needs to be implemented in Adobe Launch, where the on-page data layer needs to be matched to the dimensions and metrics of the Report Suite. If every Report Suite is configured differently, a lot of work needs to be put […]
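As a rough illustration of how such a visualization could be fed, here is a minimal sketch that exports the dimension configuration of several Report Suites to a CSV file Power BI can read, assuming an Analytics 2.0 API access token; the RSIDs and credential values are placeholders.

```python
# Minimal sketch: dump the dimension configuration of several Report Suites
# to a CSV for Power BI. Token, client ID, company ID and RSIDs are
# placeholders; the endpoint follows the Analytics 2.0 API docs.
import csv
import requests

ACCESS_TOKEN = "<access-token>"
API_KEY = "<client-id>"
COMPANY_ID = "<global-company-id>"
REPORT_SUITES = ["rsid_one", "rsid_two"]  # hypothetical RSIDs

HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "x-api-key": API_KEY}
BASE = f"https://analytics.adobe.io/api/{COMPANY_ID}"

with open("dimensions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["rsid", "dimension_id", "dimension_name"])
    for rsid in REPORT_SUITES:
        resp = requests.get(f"{BASE}/dimensions",
                            headers=HEADERS, params={"rsid": rsid})
        resp.raise_for_status()
        for dim in resp.json():
            writer.writerow([rsid, dim.get("id"), dim.get("name")])
```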
Summary Report Suites in Adobe Analytics
Ever since Adobe released Analysis Workspace in 2016, customers have been looking for a way to combine data from multiple Report Suites within a single Project in Analysis Workspace. Earlier in 2020 we finally became able to pull data from more than one Report Suite in Workspace, since Report Suites can now be selected at the Panel level instead of for the whole project. While this feature is awesome and a huge improvement, users like myself still wanted to combine multiple Report Suites in a single Freeform Table within the same Panel. Unfortunately, we still have to invest some work to get a view like this, where the table on the right combines data from the two Report Suites on the left: The screenshot above was taken from a Report Suite which was created to summarize data from other Report Suites. This can be done without any changes to the implementation or additional […]
Supercharge your Adobe Analytics Classifications with Google Sheets and Automation
Classifications are one of the best features of Adobe Analytics. They allow you to enrich and translate tracked values by uploading classification files. One of the most common use cases is handling marketing campaign tracking codes, which can be translated from technical IDs to understandable details about the campaign. This can be automated to a great extent, which is what this article is about. We are going to look at the architecture of our solution and plan our implementation. Right after that, we will start building our spreadsheet in Google Sheets and create an automatic upload to Adobe Analytics using the Python programming language. If you are just interested in the final script, you can find it on GitHub. So what do we love and hate about Classifications? When you are using Adobe Analytics, chances are pretty high that you are already using some form of them. Rightfully so, because they are […]
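To give a taste of the spreadsheet side of that architecture, here is a minimal sketch, assuming the gspread package, a service-account JSON file, and a sheet named "Campaign Classifications" (all hypothetical); it writes a tab-separated file in the SAINT classification format, while the actual upload to Adobe Analytics is left to the full script on GitHub.

```python
# Minimal sketch: pull classification rows from a Google Sheet with gspread
# and write them to a tab-separated SAINT-style classification file.
# Sheet name, worksheet and column layout are assumptions; the upload to
# Adobe Analytics is a separate step handled by the full script.
import csv
import gspread

# gspread reads credentials from a service-account JSON file
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("Campaign Classifications").sheet1

rows = worksheet.get_all_values()  # first row holds the column headers

with open("classifications.tab", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Key"] + rows[0][1:])  # SAINT files start with a Key column
    writer.writerows(rows[1:])
```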
Importing Organic Google Search data to Adobe Analytics with a single script
Some time ago, I published an article explaining how to get Google Search performance data from the Google Search Console to Adobe Analytics. In that post, I explained how to query the Google Search API, write the result to an Adobe Analytics Data Sources file, and upload it to Adobe Analytics. The same can be achieved in a more automated way using the Adobe Analytics Data Sources API, which is what this article is about. It explains how to use a script I published on GitHub. If this feels too advanced, feel free to go back to the old article. So, why another article about this topic? The old post received a lot of attention and led to some companies adopting the methods I described there. But if you try to implement it in a production environment, you need to take care of some things yourself. For example, you need […]
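For a flavor of what the script automates, here is a minimal sketch of the first half: querying daily clicks and impressions from the Google Search Console API. The service-account file and site URL are placeholders, and turning the returned rows into a Data Sources upload is exactly what the published script takes care of.

```python
# Minimal sketch: query daily organic search metrics from the Google Search
# Console API. Credentials file and property URL are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

request = {
    "startDate": "2021-01-01",
    "endDate": "2021-01-31",
    "dimensions": ["date", "query"],
    "rowLimit": 1000,
}
response = (
    service.searchanalytics()
    .query(siteUrl="https://www.example.com/", body=request)
    .execute()
)
for row in response.get("rows", []):
    date, query = row["keys"]
    print(date, query, row["clicks"], row["impressions"])
```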
Getting Google Search Keywords into Adobe Analytics
While Adobe Analytics is a much more mature solution than Google Analytics, the latter has always had an advantage when it comes to Search Keywords. It shouldn’t surprise us that the company that offers both the search engine and the analytics tool has some integration between them. While it was easy to get search keywords from the referring URL in the past, those days have been gone for years now. Ever since, businesses have been struggling to know what their visitors were initially looking for when they came to their website. This article outlines a couple of ways to achieve this in Adobe Analytics. For this post we will take a look at the integration Adobe offers to Analytics Prime customers, called Advertising Analytics. Right after that we are going to build our own integration based on the same method to get some insight into Google Ads performance. To […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 6: Data Storage
This is the sixth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are taking a brief look at what we can do with the data we collected and processed, using ClickHouse. In the previous post we built a persisted visitor profile with Python and Redis. If you are new to this series, it might help to start with the first post. During this series we defined multiple topics within Kafka, so we now have different levels of processing and persistence available. If we want to keep any of it, we should put it in persistent storage like a Data Lake with Hadoop or a database. For this project, we are using Elasticsearch and dipping our toes into a database called ClickHouse, just for fun! Feeding data into Elasticsearch comes first: from the previous part, we have a nice Kafka […]
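As a rough sketch of that feed, the following assumes the kafka-python and elasticsearch (7.x style) packages, a local broker, and hypothetical topic and index names; the real series code differs in detail.

```python
# Minimal sketch: read processed events from a Kafka topic and index them
# into Elasticsearch. Broker, topic and index names are assumptions.
import json
from kafka import KafkaConsumer
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
consumer = KafkaConsumer(
    "processed-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # one document per tracking event; Elasticsearch generates the ID
    es.index(index="tracking-events", body=message.value)
```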
Building an Enterprise Grade OpenSource Web Analytics System – Part 5: Visitor Profile
This is the fifth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are going to build a visitor profile to persist some of the data we track, using Python and Redis. In the last post we processed the raw data using Python and wrote it back to Kafka. If you are new to this series, it might help to start with the first post. Now that we have a nice processed version of our events, we want to remember certain things about our users. To do this, we are going to create a Visitor Profile in Redis as a high-performance store. The process for persisting values will look like this: To build our Visitor Profile, we first set up a little helper script that will take our processed tracking events and flatten them. It looks […]
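To illustrate the idea, here is a minimal sketch of such a flattening helper plus a Redis profile update, assuming the redis package and a few hypothetical field names in the processed events.

```python
# Minimal sketch: flatten a processed tracking event and persist selected
# values in a Redis hash keyed by visitor ID. Field names are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)

def flatten(event: dict, parent: str = "") -> dict:
    """Flatten nested dicts into dot-separated keys, e.g. page.url."""
    items = {}
    for key, value in event.items():
        name = f"{parent}.{key}" if parent else key
        if isinstance(value, dict):
            items.update(flatten(value, name))
        else:
            items[name] = value
    return items

def update_profile(event: dict) -> None:
    flat = flatten(event)
    visitor_id = flat["visitor.id"]  # hypothetical field from the processed event
    r.hset(f"profile:{visitor_id}", mapping={
        "last_seen": flat.get("event.timestamp", ""),
        "last_page": flat.get("page.url", ""),
    })
    r.hincrby(f"profile:{visitor_id}", "event_count", 1)
```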
Building an Enterprise Grade OpenSource Web Analytics System – Part 4: Data Processing
This is the fourth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are building the processing layer to work with our raw log lines. In the last post we used Nginx and Filebeat to write our tracking events to Kafka. If you are new to this series, it might help to start with the first post. At this point in the series, we have a lot of raw tracking events in our Kafka topic. We could already use this topic to store the raw log lines in our Hadoop cluster or a database. But doing some additional processing first will make our lives much easier later on. Since Python is the data science language of today, we will be using it. The result will then be written to another Kafka topic for further processing […]
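As a simplified sketch of this processing step, the following consumes raw events from one topic, derives two page fields, and writes the result to a second topic; it assumes the kafka-python package, a local broker, and hypothetical topic and field names.

```python
# Minimal sketch: consume raw events from one Kafka topic, add derived
# fields, and write the result to a second topic. Names are assumptions.
import json
from urllib.parse import urlparse
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    # example enrichment: split the page URL into host and path
    parsed = urlparse(event.get("page_url", ""))
    event["page_host"] = parsed.netloc
    event["page_path"] = parsed.path
    producer.send("processed-events", event)
```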
Building an Enterprise Grade OpenSource Web Analytics System – Part 1: Architecture
Some time ago I wrote a little series on how to amp up your log analytics activities. Ever since then I wanted to start another project: building a fully fledged Analytics system with client-side tracking and unlimited scalability out of OpenSource components. That is what this series is about, since I had some time to kill during Easter in isolation. This time, we will be using a tracker in the browser or mobile app of our users instead of logfiles alone, which is called client-side tracking. That will give us a lot more information about our visitors and allow for some cool new use cases. It is also similar to how tools like Adobe Analytics or Google Analytics work. The data we collect then has to be processed and stored for analysis and future use. As a client-side tracker, we will be using the Snowplow tracker. […]
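To make the client-side idea concrete in Python terms, here is a minimal sketch using the snowplow-python-tracker package as a stand-in for the JavaScript tracker the series actually runs in the browser; the collector host is a placeholder, and the tracker constructor varies slightly between package versions.

```python
# Minimal sketch: send a page view to a Snowplow collector with the Python
# tracker (a stand-in for the JavaScript tracker used in the browser).
# "collector.example.com" is a placeholder; constructor arguments differ
# slightly between versions of snowplow-python-tracker.
from snowplow_tracker import Tracker, Emitter

emitter = Emitter("collector.example.com")  # your Snowplow collector endpoint
tracker = Tracker(emitter, namespace="demo", app_id="my-site")

# track a page view, just like the JS tracker would from the browser
tracker.track_page_view("https://www.example.com/", "Example Page")
```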