Legal Disclaimer: Data privacy is a diverse and ever-changing topic. This makes it nearly impossible to give reliable recommendations to a broad audience. Please consult your company’s legal department on whether the ideas described here are permissible under your jurisdiction. If there has been one predominant topic in the web analytics space for the last couple of years, it surely is data privacy. GDPR is a thing in Europe, COPPA in the US, ITP on planet Apple, and cookie consent banners on every website. Conducting safe data collection as a global business has become more and more challenging, pushing businesses to be ever more careful. Because of this landscape, a lot of businesses are looking for a “bullet-proof” way to analyze website users’ behavior. While Google Analytics is a data privacy nightmare, tools like Piwik (Matomo) try to justify their existence by claiming to be more privacy […]
Tag: Building your own Web Analytics
Building an Enterprise Grade OpenSource Web Analytics System – Part 7: Analytics Dashboard
This is the seventh part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are building an Analytics Dashboard in Kibana for our data in Elasticsearch. In the previous post we built the connection from Kafka to Elasticsearch and Clickhouse to store the data. If you are new to this series, it might help to start with the first post. We have come a long way in this series. We built everything from the client implementation with Snowplow to the processing and enrichment pipelines with Kafka and Python and stored all the data in Elasticsearch. Now it is time to make that data accessible in an appealing way to analysts and business users. The obvious solution for Elasticsearch is Kibana, which is developed by the same company and is designed to work perfectly with Elasticsearch! Web Analytics Dashboard in Kibana In Kibana, […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 6: Data Storage
This is the sixth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are taking a brief look at what we can do with the data we collected and processed, using Clickhouse. In the previous post we built a persisted visitor profile for our visitors with Python and Redis. If you are new to this series, it might help to start with the first post. During this series we defined multiple topics within Kafka, so we now have different levels of processing and persistence available. If we want to keep any of it, we should put it in a persistent storage like a Data Lake with Hadoop or a database. For this project, we are using Elasticsearch and dipping our toes into a database called Clickhouse for fun! Feeding Data into Elasticsearch From the previous part, we have a nice Kafka […]
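The excerpt cuts off before the implementation, but as a rough sketch of what feeding a Kafka topic into Elasticsearch could look like in Python (topic, broker and index names are placeholders, assuming the kafka-python and elasticsearch client libraries):

```python
# Minimal sketch: consume processed events from Kafka and bulk-index them
# into Elasticsearch. Topic, broker and index names are hypothetical.
import json

from kafka import KafkaConsumer                    # pip install kafka-python
from elasticsearch import Elasticsearch, helpers   # pip install elasticsearch

consumer = KafkaConsumer(
    "tracking-events-processed",                   # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
es = Elasticsearch("http://localhost:9200")

def actions():
    # Wrap every Kafka message in a bulk action for our tracking index.
    for message in consumer:
        yield {
            "_index": "tracking-events",           # hypothetical index name
            "_source": message.value,
        }

# Stream the documents into Elasticsearch in batches.
for ok, item in helpers.streaming_bulk(es, actions(), chunk_size=500):
    if not ok:
        print("Failed to index document:", item)
```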
Building an Enterprise Grade OpenSource Web Analytics System – Part 5: Visitor Profile
This is the fifth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are going to build a visitor profile with Python and Redis to persist some of the data we track. In the last post we processed the raw data using Python and wrote it back to Kafka. If you are new to this series, it might help to start with the first post. Now that we have a nicely processed version of our events, we want to remember certain things about our users. To do this, we are going to create a Visitor Profile in Redis as a high-performance store. The process for persisting values will look like this: Building our Visitor Profile First thing in this part, we are setting up a little helper script that will take our processed tracking events and flatten them. It looks […]
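The helper script itself is cut off in this excerpt; purely as a sketch of the idea, flattening an event and merging it into a per-visitor Redis hash might look something like this (key names, field names and TTL are made-up assumptions, using the redis-py client):

```python
# Minimal sketch of a visitor profile in Redis: flatten a processed event
# and merge it into a per-visitor hash. Key/field names are illustrative.
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def flatten(event, parent_key="", sep="."):
    """Flatten a nested event dict into {'a.b': value} pairs."""
    items = {}
    for key, value in event.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

def update_profile(event):
    # Keyed by a (hypothetical) user id field from the processed event.
    user_id = event["user"]["id"]
    profile_key = f"visitor:{user_id}"
    r.hset(profile_key, mapping=flatten(event))   # remember latest values
    r.hincrby(profile_key, "event_count", 1)      # simple event counter
    r.expire(profile_key, 60 * 60 * 24 * 90)      # keep the profile 90 days

update_profile({"user": {"id": "abc-123"}, "page": {"url": "/pricing"}})
```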
Building an Enterprise Grade OpenSource Web Analytics System – Part 4: Data Processing
This is the fourth part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are building the processing layer to work with our raw log lines. In the last post we used Nginx and Filebeat to write our tracking events to Kafka. If you are new to this series, it might help to start with the first post. At this point in the series, we have a lot of raw tracking events in our Kafka topic. We could already use this topic to store the raw log lines in our Hadoop cluster or a database. But doing some additional processing now will make our life a little easier later on. Since Python is the data science language of today, we will be using that language. The result will then be written to another Kafka topic for further processing […]
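As a hedged sketch of that processing step (topic names and the assumed log format are placeholders, not the post’s actual code): read raw lines from one Kafka topic, turn them into structured events, and write them to another topic.

```python
# Minimal sketch of the processing layer: consume raw log lines, parse the
# tracker parameters out of the request, and produce structured events.
import json
from urllib.parse import urlparse, parse_qsl

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer(
    "tracking-raw",                              # hypothetical raw topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: v.decode("utf-8"),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def process(raw_line):
    # Assumption: Filebeat ships a JSON document whose "message" field holds
    # the requested URL, with the tracker parameters in the query string.
    doc = json.loads(raw_line)
    event = dict(parse_qsl(urlparse(doc["message"]).query))
    event["@timestamp"] = doc.get("@timestamp")
    return event

for message in consumer:
    producer.send("tracking-processed", process(message.value))  # hypothetical topic
```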
Building an Enterprise Grade OpenSource Web Analytics System – Part 3: Data Collection
This is the third part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are setting up the tracking backend with Nginx and Filebeat. In the last post we took care of the client side implementation of Snowplow Analytics. If you are new to this series, it might help to start with the first post. Now that we have a lot of data being sent from our clients, we need to build a backend to take care of all the events we want to capture. Since we are sending our requests unencoded via GET, we can just configure our web server to write all requests to a logfile and send them off to the processing layer. Configuring Nginx with Filebeat In our last project we used a configuration just like the one we need. As web server, we used and will […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 2: Client Tracking
This is the second part of a seven-part series explaining how to build an Enterprise Grade OpenSource Web Analytics System. In this post we are setting up the Client Tracking using the JavaScript tracker from Snowplow Analytics. In the last post we took a look at the system architecture that we are going to build. If you are new to this series, it might help to start with the first post. When building a mature Web Analytics system yourself, the first step is to build some functionality into your app or website that sends events to the backend analytics system. This is called client side tracking, since we rely on the application to send us events instead of looking at logfiles alone. For this series we are going to look at website tracking specifically, but the same principles apply to mobile apps or even server side tracking. Almost every mature […]
Building an Enterprise Grade OpenSource Web Analytics System – Part 1: Architecture
Some time ago I wrote a little series on how to amp up your log analytics activities. Ever since then I have wanted to start another project: building a fully fledged analytics system with client side tracking and unlimited scalability out of OpenSource components. That is what this series is about, since I had some time to kill during Easter in isolation. This time, we will be using a tracker in the browser or mobile app of our users instead of logfiles alone, which is called client side tracking. That will give us a lot more information about our visitors and allow for some cool new use cases. It is also similar to how tools like Adobe Analytics or Google Analytics work. The data we collect then has to be processed and stored for analysis and future use. As a client side tracker, we will be using the Snowplow tracker. […]
Building your own Web Analytics from Log Files – Part 6: Conclusion
This is the sixth part of the six-part series “Building your own Web Analytics from Log Files”. In this series we built a rather sophisticated logging and tracking functionality for our website. We used OpenResty to identify and fingerprint our users via cookies, stored that information in log files which were shipped to Elasticsearch, and visualized it with Kibana. Web Analytics democratized By using those techniques, we are able to use what we already have (log file processing) to answer questions about our users. Under the best conditions this doesn’t even lead to a bigger technical footprint. This way we can gain deep insights into our user behavior without external tools. Even as a startup or hobby developer you are now able to put the user first on your digital platforms. Next steps While this series is done for now, we have a starting point to build our platform further. With some frontend […]
Building your own Web Analytics from Log Files – Part 5: Building our first Dashboard
This is the fifth part of the six-part series “Building your own Web Analytics from Log Files”. At this point in the series we have our log files in Elasticsearch, with indices like “custom-filebeat-tracking-logs-7.4.0-2020.01.03”. The first thing to do is to set up a Kibana index pattern for them. Kibana Configuration In Kibana we go to Management -> Index Patterns -> Create index pattern. As index pattern we use “custom-filebeat-tracking-logs-*”, which matches all the indices following our daily index pattern. In the next step, we set the Time Filter field name to “@timestamp”. This is the timestamp that marks the point when Filebeat indexed the document. This is fine for now; we click “Create index pattern” and are done with this part! Checking our Data Now, let’s head to the Discover section in Kibana and look at our index pattern. And there it is: our log entries show up like we wanted: This […]
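The post does this through the Kibana UI; purely as an alternative sketch, the same index pattern could presumably be created programmatically via Kibana’s saved objects API (assuming a Kibana 7.x instance on localhost without authentication):

```python
# Minimal sketch: create the index pattern via Kibana's saved objects API
# instead of clicking through the UI. URL and auth setup are assumptions.
import requests  # pip install requests

response = requests.post(
    "http://localhost:5601/api/saved_objects/index-pattern",
    headers={"kbn-xsrf": "true"},            # header required by the Kibana API
    json={
        "attributes": {
            "title": "custom-filebeat-tracking-logs-*",
            "timeFieldName": "@timestamp",
        }
    },
)
response.raise_for_status()
print("Created index pattern with id:", response.json()["id"])
```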
Building your own Web Analytics from Log Files – Part 4: Data Collection and Processing
This is the fourth part of the six-part series “Building your own Web Analytics from Log Files”. Legal Disclaimer: This post describes how to identify and track the users on your website using cookies, IP addresses and browser fingerprinting. The information and process described here may be subject to data privacy regulations under your legislation. It is your responsibility to comply with all regulations. Please educate yourself on whether things like GDPR apply to your use case (which is very likely), and act responsibly. In the last part we built a configuration for OpenResty to generate user and session IDs and store them in browser cookies. Now we need a way to actually log and collect those IDs together with the requests our web server handles. OpenResty Configuration To be able to log our custom variables we need to announce them to Nginx. This is done right in the server part of […]
Building your own Web Analytics from Log Files – Part 3: Setting up Nginx with OpenResty
This is the third part of the six-part series “Building your own Web Analytics from Log Files”. Legal Disclaimer: This post describes how to identify and track the users on your website using cookies and browser fingerprinting. The information and process described here may be subject to data privacy regulations under your legislation. It is your responsibility to comply with all regulations. Please educate yourself on whether things like GDPR apply to your use case (which is very likely), and act responsibly. Identifying Users and Sessions One of our goals for this project is to be able to tell how many people are using our site. This means we need a way to differentiate between the users on our site. One approach would be to look at the IP addresses of our users. This is not very precise, since all devices on the same internet connection share an IP address. Especially for […]
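The post itself implements the identifiers in OpenResty/Lua; purely to illustrate the cookie-based idea in the same language as the rest of this series’ processing code, a conceptual Python sketch could look like this (cookie names and lifetimes are made up):

```python
# Conceptual sketch only: a long-lived user ID distinguishes visitors, a
# short-lived session ID groups their visits. Names and TTLs are assumptions.
import uuid

USER_COOKIE_TTL = 60 * 60 * 24 * 365 * 2   # roughly two years
SESSION_COOKIE_TTL = 60 * 30               # 30 minutes of inactivity

def ensure_ids(cookies):
    """Return (user_id, session_id), generating any that are missing."""
    user_id = cookies.get("uid") or str(uuid.uuid4())
    session_id = cookies.get("sid") or str(uuid.uuid4())
    return user_id, session_id

def set_cookie_headers(user_id, session_id):
    # Headers a web server would send so the browser remembers the IDs.
    return [
        f"Set-Cookie: uid={user_id}; Max-Age={USER_COOKIE_TTL}; Path=/",
        f"Set-Cookie: sid={session_id}; Max-Age={SESSION_COOKIE_TTL}; Path=/",
    ]
```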
Building your own Web Analytics from Log Files – Part 2: Architecture
This is the second part of the six-part series “Building your own Web Analytics from Log Files”. Architecture Overview To start off this series, let’s remember what we want to achieve: we want to enable a deeper understanding of our website users by enriching and processing the log files we already collect. This article looks at the components we need for this and how to make our life as easy as possible. To achieve our goal, we need to teach our web server to identify our users, store information about their activity in the log files, ship those files to storage, and make it actionable with a way of visualizing it. Because I believe in Open Source Software, we will look at our options within that category. Another requirement is to introduce as few components as possible and keep scalability in mind. Choosing our Web Server The first part of our […]
Building your own Web Analytics from Log Files – Part 1: Motivation
This is the first part of the six-part series “Building your own Web Analytics from Log Files”. What is Web Analytics As the owner or administrator of a website, you will go through different phases of maturity. When you are just starting with a hobby or web project, you will most likely care about the technical setup and gaining traction. Once everything is up and running, you will start asking yourself questions like: How many people are using my website? How many of those are new visitors? Which page on my website attracts the most (new) visitors? Those are Web Analytics questions. It is what Web Analysts spend their time on to deliver value to the business behind it. To achieve that, we most commonly use tools like Piwik (Matomo), Google Analytics, or Adobe Analytics. Those tools rely on some JavaScript code that needs to be integrated into a website […]