Krzysztof Witczak

Thoughtworks Technology Radar analysis

November 26, 2023

The Technology Radar from Thoughtworks started in 2010 and has since been consistently updated two to three times a year. The newest version comes from September and it’s already the 29th volume of the series! 👏

The Radar is often remembered for its eye-catching visual structure.


Let’s recap what it means, since usually people know only part of it:

  • Blips are the dots; each one refers to an interesting thing being monitored.
  • Quadrants represent different kinds of blips: techniques, tools, platforms, languages and frameworks.
  • Rings indicate the suggested state in the adoption lifecycle.

    • Adopt - go for it; these ideas have proven themselves in multiple contexts
    • Trial - probably good, and worth trying on pilot projects that can handle the risk
    • Assess - often new ideas worth exploring to understand better, because it’s not yet known how useful they are
    • Hold - don’t necessarily stop immediately, but proceed with caution and consider sunsetting in the future
    • Blips also have shapes around them whose meanings relate to adoption and rings:

      • Just a blip - no change; it was here in the last volume too
      • Circle around a blip - a new addition in this volume
      • Quarter of a circle on a blip - if it points towards the centre of the radar, the blip is moving in (it’s better or safer to adopt now than in the last volume); if it points outwards, it’s moving out
      • Additionally, if a blip hasn’t moved for some time, it won’t be placed in the newest radar, so it doesn’t blur the picture. In the end, the radar is focused on the future: new tech and changes.

Yay, that’s a lot - no wonder people keep forgetting. Additionally, please remember that the Thoughtworks radar is opinionated, based on their own experience, company projects and employee suggestions. It’s not market research. Thoughtworks suggests that other companies create their own radars as an exercise to start a conversation, assess their technology portfolio, check alignment and improve their technological vision - and recommends doing such an exercise twice a year.

However, a common practice in the industry is to review the Thoughtworks Tech Radar to see what they recommend and whether there’s anything interesting our own companies could try too. Let’s do exactly that.

So what’s in adopt?

I like to start with the blips in this section because they are usually great ideas worth getting hyped about! There are 8 blips in the Adopt ring in total.

Initially, I was a bit surprised to see Design Systems on the list, because I thought it was already a well-known concept, used by many teams, including my own. However, when connected with another blip - #17 - Design system decision records - it’s clear that there is more to learn here. I never thought of applying a concept similar to ADRs to a design system, but it makes sense - especially if you maintain it for years, it undergoes bigger changes in direction, or you simply want better clarity on how future components should look or behave. Additionally, Thoughtworks mentioned the benefit of driving a product mindset in teams developing Design Systems. I’m not sure I see that in my teams.

I’ve heard about RFCs from multiple sources, but I assumed that if we have ADRs in place, it’s essentially the same idea. This is not always the case, because you may use RFCs in more contexts, not only architecture. The core idea is that you want a structured, transparent form of gathering feedback, enclosed within a time frame, after which you should move on with the decision. I like it because quick decision-making is a big problem in many organizations that also want to give voice to their people and look for consensus. We also use dbt, Mermaid and recently Playwright, and we are happy with all of these, especially the last one. Ruff caught my attention because it’s faster by an order of magnitude compared with other linters, which is insane! Finally, Snyk took me by surprise, but only in the beginning.

The reason Snyk is mentioned comes from the fact that at least 5 coding-assistance tools were brought into this radar (all in Trial or Assess):

As great as they are, it seems that they unfortunately may contribute to a new type of attack related to generated code. It’s mentioned in blip #16 - Dependency health checks to counter package hallucinations. It’s similar to typosquatting or masquerading and may become increasingly risky in the coming years as more and more engineers use AI assistants prone to hallucination. What is it, in short? You ask an AI assistant to do something, and it guides you to install a dependency. Unfortunately, that dependency does not exist - it was hallucinated. Normally, if you add it, your package manager throws an error and the party is over. However, a hacker who is aware of the frequent mistakes of popular AI models can publish an infected package under the hallucinated name… and suddenly your hallucinated package downloads just fine! 🙈 It’s crazy that Vulcan already tested this attack vector: in Python, out of 227 questions asked, 80 answers contained unpublished packages… 😅 I’m not sure when someone will place a bad apple among one of those 80, but time will tell.
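A minimal sketch of such a dependency health check: before installing a package an assistant suggested, verify it is actually published on PyPI via its JSON API. The helper names here are mine, and a real tool like Snyk Advisor also weighs package age, downloads and maintainers - existence alone doesn’t prove safety, since an attacker may have already registered the hallucinated name.

```python
import json
import urllib.request
from urllib.error import HTTPError

def classify_status(code: int) -> str:
    """Map an HTTP status from the registry lookup to a verdict."""
    if code == 200:
        return "published"
    if code == 404:
        return "possibly-hallucinated"  # prime typosquatting territory
    return "unknown"                    # network hiccup etc. -- retry later

def check_on_pypi(name: str) -> str:
    """Look a package up in PyPI's JSON API and classify the result."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)             # parse the metadata to be sure it's real
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)

# Usage (requires network access):
#   check_on_pypi("requests")  -> "published"
#   check_on_pypi("<name the assistant made up>")  -> "possibly-hallucinated"
```

A "possibly-hallucinated" verdict is the moment to stop and double-check what the assistant told you to install.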


As I’ve mentioned in my other article, greater usage of code-generation tools will bring its blessings, but also new problems and threats that we need to think about in advance. This is one of them, and Snyk Advisor may combat it, followed soon by more third-party security-analysis tools.

Aaand what’s a bit risky and sits in Hold?

There are only two things here, both interesting:

The OWASP blip is a bit of a wink 😉 because it’s a double negation - putting “ignoring OWASP” on Hold means you should pay attention to OWASP. That may seem like basic advice for senior engineers… but in reality, I think it’s spot on, because OWASP offers much more than the popular “Top 10” list. There are many lists. Check these out:

Thoughtworks claims that OWASP has a lot of good recommendations but at the same time, it’s heavily underused, and I think they’re right.

Web Components and SSR is a well-known problem. In short: if you use either of these technologies alone, you’re happy, but once you try to mix them, it becomes hell and you’ll need to fight so-called flashes of unstyled content. For Thoughtworks, this problem alone is a reason to drop Web Components in many projects… 😔

Other blips?

There are 89 more blips…! 😂 That’s too many to describe here, so I’ll group the ones that caught my attention into themes and review them together.

Security theme

There were many blips related to security - from all quadrants.

The #4 - Attack path analysis is not a new thing in the industry, but it has become much easier to do thanks to new security-monitoring technologies mentioned as other blips, like #30 - Orca or #32 - Wiz. Of course, if you can afford them! Another technique was brought back from the past, this time from the 1940s - #11 - Risk-based failure modelling. It’s pretty simple: you start by exploring how your system may fail, and then you estimate whether you can accept each risk or should mitigate it somehow. I think it relates nicely to a technique from past radars that we use at GAT, Threat Modelling. I haven’t tried combining them yet, but I plan to in the future.
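The risk-based part can be sketched in a few lines - enumerate failure modes, score likelihood and impact, and decide what to mitigate. The scales, numbers and threshold below are my own made-up assumptions, purely to show the shape of the exercise:

```python
# Risk-based failure modelling, minimal version: risk = likelihood x impact,
# both scored on a 1-5 scale (a common convention, not prescribed anywhere).
FAILURE_MODES = [
    # (failure mode,                    likelihood, impact)
    ("primary DB loses a node",                  2, 5),
    ("third-party auth provider goes down",      3, 4),
    ("stale cache served to users",              4, 2),
]

MITIGATE_ABOVE = 9  # hypothetical threshold: above this, write a mitigation plan

# Review the highest risks first.
for mode, likelihood, impact in sorted(
    FAILURE_MODES, key=lambda m: m[1] * m[2], reverse=True
):
    risk = likelihood * impact
    verdict = "mitigate" if risk > MITIGATE_ABOVE else "accept"
    print(f"{risk:>2}  {verdict:<8}  {mode}")
```

The output is essentially a prioritized worry list - a natural input for a Threat Modelling session.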


Additionally, #5 - Automatic merging of dependency update PRs was mentioned. We’re not there yet at GAT, although we use Renovate to create automatic dependency-update MRs for us, which we then merge manually. It can be annoyingly spammy at times, but we’ve also already run into situations where manual intervention in the generated MR was necessary to avoid issues on production.

#15 - Zero trust security for CI/CD and #7 - OIDC for GitHub Actions both relate to a similar problem: our CI/CD pipelines usually require critical secrets to operate, which makes them a priority target for many malicious organizations. It’s easy to expose those secrets by mistake or through lack of knowledge, especially since DevOps tooling and best practices evolve so fast. Many people have only picked up GitHub Actions in the last year or two, so it’s not difficult to believe they may not know how to secure them correctly, even though it’s easy with OIDC. We need more awareness in this area.
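As a sketch of the OIDC approach (the role ARN, region and project details below are placeholders), a GitHub Actions job can request a short-lived identity token instead of keeping long-lived cloud keys as repository secrets:

```yaml
# Sketch: deploy job authenticating to AWS via OIDC - no static
# AWS_SECRET_ACCESS_KEY stored in the repository settings at all.
on: push

permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role
          aws-region: eu-central-1
      # From here on the job holds temporary credentials that expire
      # on their own - nothing long-lived to leak.
```

The cloud provider trusts GitHub’s token issuer for that specific repository and role, so a leaked workflow log no longer hands out permanent keys.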

Since I mentioned many DevOps tools, it’s worth listing those as well:

  • #48 - cdk-nag - reports security and compliance issues in AWS CDK apps or CloudFormation templates.
  • #49 - Checkov - static security scanner for IaC.
  • #64 - Prisma runtime defence - builds a model of a container’s expected behaviour and then detects anomalies at runtime, reporting when it suspects the system may be under attack.

AI - of course!

Even though Thoughtworks has its roots in CI/CD tooling and DevOps, AI tools may be the biggest pool in the entire report. I’m not surprised, since this is the theme of all of 2023… 😂


First of all, we have #21 - Self-hosted LLMs. There may be many reasons a company decides to self-host instead of using a cloud provider, but most commonly it’s one of these:

  • Security, privacy, compliance - data won’t leave their system and it’s safe.
  • Performance, fine-tuning, control - tailor it to your needs.
  • Cost optimization - I don’t have hard data on this, but I guess we can define a threshold of prompt usage beyond which self-hosting becomes more cost-effective.
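To illustrate the cost-threshold idea, here’s a back-of-envelope break-even calculation - every number in it is a made-up assumption, purely to show the formula:

```python
# Break-even point for self-hosting an LLM vs. paying per token.
# All prices below are invented placeholders, not real quotes.
API_PRICE_PER_1K_TOKENS = 0.002   # $/1k tokens on a hosted API (assumed)
GPU_SERVER_PER_MONTH = 1_200.0    # $/month for a GPU box (assumed)
OPS_OVERHEAD_PER_MONTH = 800.0    # $/month of engineering time (assumed)

# Self-hosting is roughly a fixed monthly cost...
self_host_cost = GPU_SERVER_PER_MONTH + OPS_OVERHEAD_PER_MONTH

# ...so the break-even is the monthly token volume at which the API
# bill would match it.
break_even_tokens = self_host_cost / API_PRICE_PER_1K_TOKENS * 1_000

print(f"Self-hosting wins above ~{break_even_tokens / 1e6:.0f}M tokens/month")
```

Under these invented numbers that’s a billion tokens a month - swap in your own prices and the same one-liner gives your threshold.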

With #75 - Llama 2 in place, it may be easier than ever before for almost every organization to try it. Of course, there is a debate about whether this project is truly open source, but there will be alternatives. Once you decide to self-host, tools like #99 - GPTCache will allow you to reduce resource usage and make it more viable. It’s been around a year since the AI boom started and we already have so many possibilities - it’s insane. What tooling will we have in a year, or two?
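To illustrate the caching idea (this is a conceptual sketch, not GPTCache’s actual API), memoizing answers means repeated or near-identical prompts never hit the expensive model again:

```python
from typing import Callable

class PromptCache:
    """Tiny stand-in for an LLM response cache like GPTCache."""

    def __init__(self, llm: Callable[[str], str]):
        self._llm = llm
        self._store: dict[str, str] = {}
        self.misses = 0  # how many times we actually paid for a model call

    def _key(self, prompt: str) -> str:
        # Real tools use embedding similarity; normalizing whitespace and
        # case is a crude stand-in for "semantically the same question".
        return " ".join(prompt.lower().split())

    def ask(self, prompt: str) -> str:
        key = self._key(prompt)
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._llm(prompt)  # the expensive call
        return self._store[key]

# A fake "model" stands in for a real LLM endpoint.
cache = PromptCache(llm=lambda p: f"answer to: {p}")
cache.ask("What is CloudEvents?")
cache.ask("what  is cloudevents?")  # normalizes to the same key -> cache hit
print(cache.misses)                 # only 1 expensive call instead of 2
```

The production versions replace the normalization trick with vector similarity, so “reworded” prompts also hit the cache.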


Additionally, more advanced techniques of working with AI, like #9 - ReAct prompting or #10 - Retrieval-Augmented Generation (RAG), are becoming more popular and widely known. There are fantastic online resources with weekly updates, and amazing open-source tools like #103 - LangChain and #104 - LlamaIndex that make it easier to use these techniques with popular LLMs. If you want to self-host, you need to store the vector data somewhere, and this is where the new vector databases come into play: #37 - Chroma, #39 - pgvector and of course #40 - Pinecone.
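The core RAG loop is surprisingly small. Here’s a dependency-free sketch, with a crude bag-of-words “embedding” standing in for a real embedding model and a plain list standing in for a vector database:

```python
import math
from collections import Counter

# Toy document store - in a real setup these would live in Chroma,
# pgvector or Pinecone as embedding vectors.
DOCS = [
    "Llama 2 is a family of large language models released by Meta.",
    "pgvector adds vector similarity search to PostgreSQL.",
    "CloudEvents is a specification for describing event data.",
]

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(question: str) -> str:
    """Return the document most similar to the question."""
    return max(DOCS, key=lambda d: cosine(embed(question), embed(d)))

question = "what does pgvector do?"
context = retrieve(question)
# The "augmentation": retrieved context is stuffed into the prompt,
# and this combined prompt is what you'd send to the LLM.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Swap `embed` for a real embedding model and `DOCS` for a vector database, and you have the skeleton that LangChain and LlamaIndex wrap for you.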

Finally, many of us who try prompting in production have discovered (painfully) that a small update on OpenAI’s side may change how the model reacts to our prompt, and it’s easy to simply miss that fact. This is where prompt-testing tools come in handy, for example #105 - promptfoo, which allows you to write automated tests for prompts with specific kinds of assertions and a nice UI dashboard. Neat!
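A conceptual sketch of prompt regression testing in that spirit (this is not promptfoo’s actual config format): run the prompt through the model and assert properties of the answer, so a silent model update that breaks your prompt shows up as a failing test.

```python
from typing import Callable

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. an OpenAI endpoint).
    return "Paris is the capital of France."

TESTS = [
    # (prompt, assertion on the answer, description)
    ("Capital of France?", lambda a: "paris" in a.lower(), "answer contains Paris"),
    ("Capital of France?", lambda a: len(a) < 200, "answer stays short"),
]

def run_suite(model: Callable[[str], str]) -> int:
    """Run every prompt test and return the number of failures."""
    failures = 0
    for prompt, check, desc in TESTS:
        ok = check(model(prompt))
        failures += 0 if ok else 1
        print(f"{'PASS' if ok else 'FAIL'}: {desc}")
    return failures

print("failures:", run_suite(fake_model))
```

Wire a suite like this into CI and a model-side change to your prompt’s behaviour stops being something you discover from user complaints.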

Organization and others

In this theme, we will start with #13 - Tracking health over debt and #55 - DX DevEx 360 together. I started checking health more than a year ago and discovered that a simple Spotify Health Check or Atlassian Health Monitor - both known for years - may uncover interesting problems in teams. Once you run this exercise with all of your teams, you suddenly have a health map of the organization, which is a great tool for prioritizing the next big cross-team improvements and is easier to explain to stakeholders. In the case of DevEx, I explained this approach in one of my articles this year as well, but what’s important is that Thoughtworks deliberately mentions the survey tool sold by the DX company. I agree it looks promising, but I’m worried the pricing may be too high right now - many organizations will conclude that Google Forms gives them only a slightly worse experience at a much lower cost, and that may already be a big differentiator. Only if they fail to gain quality insights might they come back to the DX tool.

Another important blip was #52 - Cloud Carbon Footprint. I see it both as a tool (created by Thoughtworks, by the way, so it’s a bit of an advertisement 😜) and as practice/awareness. You can see that many cloud providers have started showing the carbon footprint of running their services (AWS example) to raise awareness and allow customers to optimize around this metric to help the environment. The tool mentioned here lets you estimate the carbon footprint of your own software. I think it’s an important topic, because with the recent AI boom many companies may self-host GPU-intensive LLMs, which makes things worse… I also wonder how OpenAI and other AI-first companies will act in 2024 to combat this problem. Ignore it? Maybe increase prices, or tax them additionally? The future will tell… 🤔

Finally, the tool #61 - mob may be an interesting thing to try for pair- and mob-programming sessions. I’m happy to see more solutions like this one, since they may change a lot for remote-first companies.

Frontend, UI/UX Design

A couple of blips caught my attention, starting with #3 - Accessibility-aware component test design, because lately we have been thinking about making our website WCAG compliant. The recommendation is to go beyond using frameworks like chai-a11y-axe and also bake accessibility into your normal testing practices (as many specialists have recommended in the past), for example by prioritizing identifying elements by ARIA roles instead of test-ids or other approaches.

The most interesting one was #38 - Kraftful, which is “a self-described copilot for product builders…” - when fed enough data, it can identify patterns and themes to propose feature requests and even generate JIRA tickets for you! This tool is trying to automate product discovery based on user feedback.

Additionally, #29 - Lokalise and #100 - Grammatical Inflection API may be two interesting tools to avoid embarrassing translation errors in your apps. The first is a product that makes it easy to translate your localization .yml files and pass them to reviewers for a second round of feedback, while the second can automatically correct words that inflect differently based on gender (read more).


There were plenty of interesting blips in this section, so I will just quickly list them, grouped by sub-theme.

Tools that may help you out through better infrastructure automation, generation or robustness:

Things that may make a dev’s life better overall:

  • #54 - Devbox (tool) - makes it easier to create per-project environments straight from your terminal. It caught my attention and I plan to try it out on Windows WSL as an experiment.
  • #70 - GitHub merge queue (tool) - self-explanatory. Why doesn’t GitLab have it? 😭
  • #72 - Google Cloud Workstations (tool) - I see more and more products offering cloud environments. I’m tempted to try this out, since lately 16 GB has become too little to run local environments.


  • #14 - Unit testing for alerting rules (technique) - love the idea, because with more monitoring lately it becomes easy to pollute ourselves with false alarms.
  • #50 - Chromatic (tool) - a component snapshot tool that lets you know whether the UI has visually regressed.
  • #63 - Mocks Server (tool) - the chameleon in its logo is no accident, because this tool mimics/records API responses and acts as a mock server you can use in e2e tests. Seems easy to use and powerful!
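The alert-rule testing idea deserves a quick illustration. Tools like Prometheus’ promtool let you feed synthetic series into a rule and assert when it fires; the same idea in plain Python (the rule and all numbers below are hypothetical):

```python
def error_rate_alert(series: list[float], threshold: float = 0.05,
                     for_samples: int = 3) -> bool:
    """Fire only if the error rate stays above the threshold for N
    consecutive samples - exactly the kind of behaviour worth pinning
    down with a unit test before it pages anyone."""
    streak = 0
    for value in series:
        streak = streak + 1 if value > threshold else 0
        if streak >= for_samples:
            return True
    return False

# A short spike must NOT page anyone at 3am...
assert not error_rate_alert([0.01, 0.09, 0.01, 0.02])
# ...but a sustained breach must.
assert error_rate_alert([0.06, 0.07, 0.08, 0.02])
print("alert rule behaves as specified")
```

Encoding these expectations as tests means a future “small tweak” to the threshold or duration can’t silently turn the rule into a false-alarm machine.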

Finally, I’ll wrap up with #25 - CloudEvents, which is a sort of agreement, or specification, to solve a common problem: the events we use in our systems differ a lot in what they contain and what’s considered best practice, and as a result it’s difficult to keep those events working across different 3rd parties and cloud providers. CloudEvents aims to standardize this.
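For a feel of what the standard looks like, here’s a minimal CloudEvents 1.0 envelope in its JSON format - the required attributes (specversion, id, source, type) come from the spec, while the payload under "data" is of course made up:

```python
import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",                    # required: spec version
    "id": str(uuid.uuid4()),                 # required: unique per source
    "source": "/billing/invoices",           # required: who emitted it
    "type": "com.example.invoice.created",   # required: what happened
    "time": datetime.now(timezone.utc).isoformat(),  # optional timestamp
    "datacontenttype": "application/json",
    "data": {"invoiceId": "INV-42", "amount": 99.5},  # your actual payload
}

print(json.dumps(event, indent=2))
```

Because every consumer can rely on the same envelope regardless of who produced the event, routing and filtering across clouds and vendors becomes far less painful.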


In the end, I covered less than 60% of all the blips in the report in this blog, so I guess that tells you how much changes every quarter in IT. It’s difficult to keep up, even though many of these blips have been with us for years! I’ve also noticed a new trend - many people like to bash the Tech Radar, claiming they don’t agree with the selection.

That’s fine - I guess the goal of the radar in the first place is to facilitate conversations about technology. The Thoughtworks radar IS opinionated by design. We may disagree on what is hot or trendy, recommended or held. However, the radar raises awareness and shows interesting approaches and tools for people to read about and tinker with later. That’s the beauty of it! I’m sure that among over 100 blips everyone can find something new, including myself 😊