Krzysztof Witczak

Encourage your engineers to try AI Assistants

April 26, 2023

The entire software engineering community is talking about AI right now. Some of us do it out of curiosity or hype, others out of fear, and some seek real productivity boosts on the bleeding edge of technology… 🚀

This month a Reddit post was published rounding up ChatGPT news. It contains 94 links about updates made during a single week. Think that’s a lot and the hype is getting out of hand? Ten days ago another roundup was posted with 154 links, followed by yet another four days ago with 69 links. Many of these are not just raw updates, experiments or articles, but free tools that you can try and use today. And that is just a single month. It’s really hard to keep up - even if you are an AI specialist.

We are amazed by what GPT-4 can do compared to GPT-3.5 (if you haven’t watched it yet, I highly recommend the presentation by Sébastien Bubeck elaborating on the difference - it hit 1.4 million views in less than two weeks, for a reason). Not enough? For some, ChatGPT already passes the Turing test in normal day-to-day usage. Additionally, the next version - GPT-5 - has reportedly been slated for release in December this year.

Of course, the bleeding edge of tech usually looks exactly like this - the majority of this news won’t survive the test of time, may end up disappointing, or may not be all that practical.

However, it would be unwise to ignore it.

How changes may start to sink into the workplace

As with many other breakthroughs in IT over the years, we will follow the adoption curve. The majority of us, engineers, won’t use any AI assistants for a year or two beyond some short experiments, and at some point we may realize that the ecosystem has changed and that if we don’t learn it soon, we will start becoming a minority in the community. At that point, we won’t have much time to catch up.

I think the biggest trigger may come when the productivity boost becomes apparent across the company. I don’t want to spoil too much, but Gergely Orosz posted a deep dive on the topic, with data suggesting that over 70% of engineers who use AI coding tools already notice a significant productivity improvement, even though these tools are in a very early phase and far from battle-tested. If you or your manager notice that those who use AI-supported coding tools get the job done faster, it will build pressure on others to adapt, get noticed, get promoted and so on.

Finally, managers may solidify the need to learn these tools by introducing prompt engineer job offers, which is already happening, and by demanding these skills from other roles in their career growth matrix. It’s absolutely common these days to expect every software engineer to be an expert at finding information online to solve their problems quickly and effectively. Great communication skills are also expected. Why wouldn’t working with a relatively inexpensive AI assistant be on that list?

Where problems may be buried

The problem may be that working with an AI assistant is a bit different from picking up another framework or language. It’s almost like teaching software engineers to delegate effectively and communicate more precisely. The step outside your comfort zone is bigger.

When an AI tool solves in 10 seconds a problem you have been trying to untangle for the last 3 hours, impostor syndrome may kick in - professionals may start to question whether it’s worth learning, reading and memorising technical details when AI “covers” the topic. It may feel like a waste of time.

That leads us to a hypothetical problem of becoming dependent on the tool. It may sound like a distant issue, but I’m pretty certain we will quickly notice engineers drifting toward extremes:

  • Some professionals will spend a lot of time nitpicking AI-generated code to prove it’s unsafe, not production-ready or not scalable, prioritising quality and safety over speed and turning down the opportunities it may bring. Their emotional attachment to the code will grow even further, and they will try to prove their expertise is still needed.
  • Other professionals will do exactly the opposite - they will ship code like crazy, creating new services, websites and tools that they neither fully understand nor have tested, claiming they can correct it so quickly that such an approach is justified. Their ownership of code quality will diminish, and so will their craftsmanship. They may interact with the previous group in interesting ways, because they may flood them with sloppy code that needs to be reviewed carefully. Suddenly, reading code will be even more important than before, because trust will be lower than it used to be.

I hope the sensible middle will be the biggest group - people who use AI tools but still feel responsible for the quality, safety and robustness of the solution. Their approach will be to use AI to iterate over different solutions faster, create prototypes more easily and reduce the time they spend writing boilerplate code.

What can we do

I’d say we should try to tame the beast as quickly as possible. If you or your engineers have a technical learning goal for this year to accelerate your careers, like learning a new framework, programming language and so on - why not do it with an AI coding tool supporting you? 🤔

Maybe instead of following a programming language course, you could try to build a quick copy of a website using ChatGPT and reverse-engineer it? What if you challenged yourself to create a web app in a new framework as quickly as possible, using the AI tool of your choice instead of following a tutorial?
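To make that concrete - purely as a sketch, assuming you have an OpenAI API key and the openai Python package installed (the model name and prompt below are placeholders, not a recommendation) - you could ask the model to scaffold a page and then pick apart whatever it generates:

```python
# Minimal sketch using the 2023-era openai-python API; prompt and model are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a senior front-end engineer."},
        {"role": "user", "content": "Generate a single-file HTML landing page "
                                    "with a navbar, a hero section and a footer."},
    ],
)

# Save the generated page, then reverse-engineer it: read it line by line
# and ask follow-up questions about anything you don't understand.
with open("landing.html", "w") as f:
    f.write(response.choices[0].message.content)
```

The point is not the generated page itself - the learning happens in the follow-up conversation, when you ask the assistant why it structured things the way it did.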

There are many ways to train your staff on this, and the best one is to lead by example. Try it yourself first, make a tech talk out of it, and share what you have learned. I’m excited about the following tools, which I plan to play with soon:

Besides practical familiarity with current AI tools and their limitations, there is another aspect of educating your staff - awareness of the safety of using AI tooling. There are already reports of people sharing confidential data with ChatGPT - another security risk to add to our ever-growing list. Self-hosted solutions may sound safe on the surface, but they can introduce vulnerabilities and poisoning attacks too, so it’s not a topic we can ignore and pass over.
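Even a naive pre-filter can drive home the point that every prompt is outbound data. This is just an illustrative sketch - the patterns and the redact helper are made up for this post, and a real setup would rely on proper secret scanning:

```python
import re

# Naive, illustrative patterns - real secret scanning needs a dedicated tool.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),             # email addresses
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # private key headers
]

def redact(prompt: str) -> str:
    """Replace obvious secrets before a prompt leaves your machine."""
    for pattern in PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("Please debug this config: aws_key=AKIAABCDEFGHIJKLMNOP"))
```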

Happy learning! 💪

Oh, you expected me to say that this post was written by GPT-4? No! 😂

Not yet.