Arda Çetinkaya Thoughts on software, with the occasional personal rambling…

Building a Project with AI: My Experience with Agentic Development 


A few days ago, I started a small project called HollyDayz, a team holiday management application built as a single-page application (SPA) with React. Nothing too fancy, no rocket science. But the way I built it was a little different from my usual approach. I decided to let AI do most of the work. Not just the boring parts, but almost everything: the code, the database setup, the deployment configuration, the documentation. Every artifact created within the software development process, I mean the agentic development process…

In this post, I want to share how I set things up to make that possible, and which tools made it work. I am not going to go deep into the technical details of the app itself. Instead, I want to focus on the setup and the common foundation that made agentic development possible. My hope is that after reading this, you can try something similar in your own projects.

And just a note: I only wanted to demonstrate how pieces of this puzzle (agentic development) can be created. These are just examples.

Here is the full repository: https://github.com/ardacetinkaya/hollydayz

 Tool: VS Code and GitHub Copilot coding agent 

There are lots of text editors/IDEs/tools with built-in support for a chat agent. I have chosen VS Code and its chat agent with GitHub Copilot. I am a big fan of GitHub Copilot because it provides every frontier model, so its coding agent has access to different LLMs depending on your requirements. And just a quick note: GitHub Copilot now also supports other coding agents such as Codex and Claude (https://github.blog/changelog/2026-02-04-claude-and-codex-are-now-available-in-public-preview-on-github/).

But the main concept that I will present applies to all other coding agents as well. So, if you are using Claude, the approach is the same…

Coding agents are a crucial part of agentic development, because they are how you interact with LLMs. So, I would suggest you choose whichever makes you more comfortable.

The Setup: Skills, Instructions, Custom Agents, MCPs and Context 

The first thing, which is already obvious, is that an LLM's outcome is much better when you give it more context. Think of it like hiring a new developer on your team. If you just say, "build me a login page," they will probably ask a hundred questions. But if you first show them the project structure, explain the tech stack, and explain the coding style you prefer, they can work much more independently. That is exactly how agentic development does better, too.

So, first I added some skills to the coding agent (.github/skills/**/*.md). For simple PoC or demo kinds of SPA projects, I have been following some standards with React for years. I assembled all my routines and asked ChatGPT to structure them so they could be used as a skill. That is how I created my first skill, spa, for my agent. Now my chat agent has a skill to create an SPA application the way I define and guide it.
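To give a rough idea of the shape of a skill: in my setup a skill is basically a markdown file with a short frontmatter, placed under .github/skills/. The file below is a trimmed, hypothetical sketch of what such a spa skill could look like; the exact fields and folder layout may vary with your tool's version, so check its documentation.

```markdown
---
name: spa
description: Create a single-page application with React, following my SPA routines
---

# SPA skill

- Use native JavaScript (no TypeScript) and functional React components.
- Keep the folder structure flat: components/, pages/, services/.
- Prefer plain fetch() for API calls; avoid extra libraries unless asked.
```

The frontmatter gives the agent a name and a one-line description to decide when the skill applies; the body is the actual guidance it follows.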

To be able to do deployments, I added another skill, publisher. So my coding agent is now able to deploy to Vercel according to the given guidance. Within the skill, I define all the required phases: I ask it to use the Vercel CLI, follow certain steps, and so on.

Not at the beginning, but after some time, I added another skill for database setup. I just wanted the coding agent to re-create some tables whenever required. So, from time to time, whenever I ask, the coding agent can execute some predefined queries for me.

Briefly, skills are explicitly defined capabilities for coding agents. They draw boundaries for LLMs within agentic development, and they help coding agents be more effective and reliable, with your awareness.

For skills, it is also possible to provide explicit scripts that they can run. You can have a bash script or a PowerShell script to do certain tasks.

A general note on skills: I would suggest revising skills whenever you learn or experience something new. They should not be static files; they are projections of your intent for how things should be done. They are a crucial part of agentic development. And they might need to differ per workspace. In this example, the spa skill for my agent specifies native JavaScript, but for another SPA project the skill might require TypeScript. If you want the same context for all your workspaces, you can also have common, shared skills across them.

For this project, I added some MCP tools for my chat agent, so within my prompts the chat agent can use those tools. MCP tools are also a good way to make outputs much more reliable. Having a good orchestration of tools will help you a lot in agentic development. And MCP tools can also be referenced explicitly in a skill's description.
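As an illustration of what wiring up an MCP tool looks like: in VS Code, MCP servers can be declared in a workspace file (commonly .vscode/mcp.json). The snippet below is a hypothetical minimal example registering the Playwright MCP server; treat the exact schema and server package as assumptions and check the current VS Code and MCP documentation.

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```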

It is also possible to create custom agents. A custom agent can have a different behavior; you can think of it as another developer on your team. Again, you can explicitly define its role and behaviors, so it can have a specific role within your development process. In this example, I created two different agents: tech-writer and a test agent. The test agent basically performs some UI actions to test the application. I also defined some tools for specific actions: it can use Playwright, and it can take screenshots when asked, for specific purposes. The other agent, tech-writer, helps with documentation.
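A custom agent definition is, again, mostly a markdown file: some frontmatter for metadata and tools, plus a prompt describing the role. The sketch below is hypothetical (the exact location, file extension, and frontmatter fields depend on your editor and Copilot version), but it shows the idea behind my test agent.

```markdown
---
description: Tests the application UI through the browser
tools: ['playwright']
---

You are a test agent for this project. When asked, exercise the UI with
Playwright, take a screenshot when something looks wrong, and report
exactly which steps you performed and what you observed.
```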

Custom agents help you define answers to "what" questions, and skills help you define answers to "how" questions.

Working Together: VS Code Chat and Copilot CLI 

Most of my interaction with Copilot happened in two places: VS Code Chat and Copilot CLI in the terminal, and it was the same for this project. In VS Code, I opened the chat panel and just described what I needed, sometimes in full sentences, sometimes just as a quick note. Because Copilot already had the context from skills, it understood what I meant and produced relevant, usable results most of the time. I rarely had to explain or ask for the same thing twice; when I did, it was mainly because my skills, intents, or prompts were not good enough.

Copilot CLI is almost the same as VS Code Chat, and I used it a lot as well. There is also good collaboration between VS Code Chat and the CLI because of agent types. The GitHub Copilot coding agent comes in different types: local, background, and cloud. You can imagine it like this: local is a team member sitting next to you, background is another member with their own desk, and cloud is a team member working remotely. Working with this awareness helps a lot in an agentic development flow.

GitHub Agentic Workflows: AI Inside Your CI/CD 

This is kind of a bonus for this post. Until now, everything that I have tried to explain is applicable to other coding agents, too. This part is a specific feature from GitHub. GitHub Agentic Workflows is a new, work-in-progress feature that brings AI directly into GitHub Actions. I would suggest you check it out. If you are doing things on the GitHub platform, it will boost you a lot.

For my tiny application, I wanted to try it as well. When a new issue is opened in GitHub, this agentic workflow triggers an action. The action is basically another AI agent which reads the issue, checks whether it is related to the HollyDayz project, and then does one of two things. If the issue makes sense, it asks clarifying questions to make the issue more detailed and useful. If the issue is not related to the project at all, it politely lets the person know and closes the issue to reduce noise. This might sound simple, but GitHub Agentic Workflows promises much more if you are using GitHub: assigning other users, creating sub-issues, assigning agents, creating a PR, and so on. It aims to automate many concerns in the software development process.
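To sketch the idea, an agentic workflow is written as a markdown file with frontmatter for the trigger plus a natural-language body describing what the agent should do. The snippet below is only an illustration of my issue-triage flow; since the feature is still work in progress, treat the frontmatter fields and file location as assumptions and check the official documentation for the real syntax.

```markdown
---
on:
  issues:
    types: [opened]
permissions:
  issues: write
---

Read the newly opened issue. If it is related to the HollyDayz project,
ask clarifying questions in a comment to make the issue more detailed
and useful. If it is unrelated to the project, politely explain why and
close the issue.
```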

What I Learned and Why This Matters 

Looking back, the most important thing I experienced is that the setup is the skill. The actual prompting, the back and forth with the AI, is almost easy once you have the foundation right. I remember, in the old days, when I joined a company or started a project, I would set up my development environment: download and install additional tools, set up my IDE with preferred extensions and settings… Now, I think, setting up skills, agents, MCP tools, and instructions is the equivalent of those actions. Having the required setup in place for agentic development is really important.

Within this short journey, I also experienced that agentic development is not about replacing developers. It just changes the current definition of "software developer". Agentic development boosts you so you can focus on the interesting decisions. I still made all the architectural choices. I still reviewed the code. But I did not manually write all of the code. For me, writing code has never been a problem; it is really big fun for me. I used to think AI would do the boring things and let us have more fun 😊 Now, with AI, I can code in every language, so I can really have more fun. I hope you got what I tried to say.

With agentic development, AI handles lots of things, but as a developer you need to handle the direction. And in this era, knowing where to go and what is needed to get there seems more important than knowing how to go…

If you are a developer and you have not tried this kind of setup yet, I really encourage you to start small. Pick one project, add a basic instruction file, connect one or two skills, and just have a conversation with any coding agent about what you want to build. Then revise them to do much more, and better. You will see and understand the possibilities.

Until next time, happy coding… 


We, as developers, traditionally start building software to reduce uncertainty and enforce determinism for solutions and systems. Classic software engineering tries to have the same output for the same inputs.  While doing this, we build software with some explicit logic and traceable behaviors to make outcomes deterministic. 

But now we’re using non-deterministic systems (LLMs, agents) to build those deterministic solutions and systems. This is very interesting…or maybe not. 

As humans, we are also non-deterministic living beings. We trust people to build deterministic systems. So, maybe it is not so interesting to rely on AI to build software. In this aspect, AI tools/platforms are just a new collaborator.  

Who knows more: the one who reads a lot or the one who travels a lot?

There is a very clichéd question: "Who knows more: the one who reads a lot or the one who travels a lot?" But it is a good one for describing AI tools. AI tools/platforms are just collaborators who read a lot; I mean "really" a lot. But they have not experienced things the way someone who has travelled a lot has. So they do not have knowledge in the way we really feel it. Because of AI's nature, "reading a lot" does not mean they also know "a lot". By nature, they are just curators. They just assemble already-produced pieces perfectly to generate an artifact.

Having those artifacts and collaborations in our solutions changes some paradigms in software development. 

While developing software solutions with humans, we can ask "why", we can challenge "reasonings" (a.k.a. assumptions), or we can manage trust.

But with AI, we get confident-looking outputs. We cannot know or be sure how the initial reasoning was done. And we cannot easily manage their reliability in similar situations.

AI-generated artifacts might cause us to lose understanding of our own solutions.

And because of this risk, AI artifacts create a new responsibility for developers. As developers, until now, we have been the authors of the code we write. But with AI collaborators, we are now more like explicit curators, debuggers, and verifiers. Having some math formulas makes posts more serious 😊; so, now we define "developers" as

developer = curator + debugger + (10 × verifier)

Now it seems our role shifts more toward verifying correctness, operational characteristics, security concerns, domain constraints…etc.; lots of things… And those things are also some already decided and presented concerns while we are developing some solutions. Now, the way of providing those is changing. Previously we were implementing “those”, now we are verifying “those” if they are implemented as expected. 


A paradigm shift…

It feels like a meaningful paradigm shift. Now we are moving from "I built this" to "I am responsible for this". So, a developer is not just a coder but a kind of governor of the solutions. And this change will introduce new challenges and new opportunities as well. But I am not sure whether it will make developers' lives easier… Maybe it will make production easier, but it will also make responsibility heavier.

To survive, or to live with, this new paradigm shift, I guess we as developers need to learn how things work much better than before. Here are some key points that I think we need to work on more:

  • Knowing how LLMs work 
  • How skills, instructions, and agents are defined for effective results 
  • Getting stronger on architectural patterns 
  • Core pillars and principles of the technology that we depend on 
  • Being good at writing, with precise meanings and descriptions 
  • Treating generated code as guilty until proven innocent 
  • Never-ending refactoring
  • Making everything visible, with no hidden or unknown parts
  • Treating "prompts" as code 
  • Many more test cases and scenarios 
  • Cost: both the operational and financial cost of having AI collaborators 

Beyond tools and processes, this new paradigm shift will also change the behavior of developers. Let's see how we as developers evolve over time… We may write less code, but we may need to understand more than ever. Maybe the real skill of the future developer is not writing code, but understanding responsibility.

Let’s see…

In software development, I strongly believe that making things visible is one of the most important practices for building the right solution. By "visible," I mean things that people can see, discuss, and react to, not just requirements or ideas kept in documents or in our heads.

Very often, developers start with abstract designs or detailed internal structures. But in my experience, it is usually more effective to begin with UI fields, API contracts, or integration models: the more visible parts. These parts of the system interact directly with users or other systems. Because of that, they provide real and early feedback. Making interaction channels more open at the beginning helps software solutions be more effective. When something is visible, misunderstandings appear faster. Gaps become clearer. Learning starts earlier.

Visible Assumptions

Assumptions are another area where visibility matters a lot. In an ideal situation, we would always work with real data and facts; we aim to create deterministic solutions. But in reality, especially in agile environments where things change quickly, data is often incomplete, missing, or changing a lot. In such cases, assumptions are unavoidable if you want to start doing something.

I don’t like making assumptions, but I evolve over time and learn. The real risk is not making assumptions. The real risk is hiding them.

When assumptions are written down and shared, they become open for discussion and validation. Even wrong assumptions are useful if they are visible, because they can be corrected early. This reduces rework and improves decision-making over time. Sometimes even product owners/managers do not have a clear view. Making assumptions visible also helps people ask the right questions and clarify the view.

Visibility for Safety

Making things visible also creates safety within teams. When work, decisions, and uncertainties are visible, teams understand not only what they are building, but also how and why. This clarity increases self-confidence. And when confidence grows, ownership naturally follows. Teams that feel safe and confident tend to take better care of quality. Visibility supports this by reducing surprises, fear, and hidden expectations. 

Visible Communication

Visibility is equally important in communication with stakeholders

A common example that I see a lot is capacity planning. Many teams plan with 80% capacity instead of 100% to allow time for support, meetings, and operational work. This is realistic. However, often only the planned work is visible, while the remaining capacity is hidden. When this invisible work is not communicated, stakeholders may misunderstand progress or delivery speed. Making this work visible — or at least explicitly stating that it exists — helps everyone align around reality instead of assumptions. 

At the same time, visibility must be useful and intentional

Not everything needs to be visible to everyone. Too much information creates noise. What matters is: 

  • The right information is visible to the right stakeholders 
  • Stakeholders share the same understanding of the situation 
  • If something cannot be shared, people are at least aware that it exists 

This kind of visibility builds trust without overwhelming people. 

For me, making things visible is not about control or micromanagement. It is about shared understanding, safety, and ownership. I believe those are key values for an effective software development team.

When things are visible: 

  • Feedback comes earlier 
  • Assumptions are challenged faster 
  • Teams feel safer and more confident 
  • Ownership and quality improve 

That is why, in solution, project, and product development, I always try to encourage teams to make things visible in software development, even when it feels uncomfortable.

Because visibility is where learning, trust, and better software begin. 

As you may have noticed over the last year or two, or even taken part in yourself, we are in a period where AI capabilities are being used, and have become critical, for being effective and productive. Today, businesses and organizations are trying to create value by using large language models (LLMs). Not only are they gaining productivity from LLM services; businesses are now looking for ways to be more effective by integrating their own business value into these models. In this post, I will talk about strategies for producing business value with LLMs. Right now there are two hot and important topics: Retrieval-Augmented Generation (RAG) and fine-tuning… Both are strategies, within the generative AI concept, for integrating business-specific data with AI language models. With these two methods, businesses can deliver their business value more efficiently. Of course, each method has advantages and disadvantages, and which one is preferable depends on the needs.

I first wrote this post in English on 27 April 2025; you can reach that version here. I have not had a chance to write on my own page for a while. I may also move copies of posts I wrote on other platforms here, as a contribution to my own archive… 😁🫣

Retrieval-Augmented Generation (RAG)

RAG is a method where an AI model retrieves additional, relevant information from a large data store before answering a question. In simpler terms, the AI behaves as if saying, "Hold on, let me Google it first…". Here, "Googling" can be interpreted as quickly searching a data store that contains defined chunks of data.

In this context, the data store contains organization-specific data, that is, information representing the business's value. As the name of the RAG strategy suggests, it has two steps: retrieval and augmentation. First, data is retrieved from the data store with some algorithms. Then the retrieved results are augmented with the LLM to produce more effective answers. Therefore, the retrieval step is very important for presenting good data to the LLM. Preparing the data appropriately is an important step for retrieving it accurately and with relevance. This is where vector databases come into play.

Vector databases store data as arrays of numbers (vectors). These vectors can later be queried by similarity, so similar data can be found much more efficiently.

To capture similarities in the data, LLMs are used to extract them. While the business's data is loaded into the vector database, it is converted into an embedding format. For example, words with similar meanings (such as "cat" and "dog") get vectors close to each other, while unrelated words (such as "cat" and "car") end up far apart.
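To make "close" and "far" concrete, here is a tiny sketch of how vector similarity is usually measured, using cosine similarity. The three-dimensional vectors below are made up purely for illustration; a real embedding model produces vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way,
    # values near 0.0 mean they are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy "embeddings" for illustration only.
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]   # semantically close to "cat"
car = [0.1, 0.2, 0.9]   # unrelated to "cat"

print(cosine_similarity(cat, dog) > cosine_similarity(cat, car))  # True
```

The "cat"/"dog" pair scores much higher than "cat"/"car", which is exactly the property a vector database exploits when it searches for similar chunks.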

Special embedding models are used for the embedding step, so the data is stored in the vector database in a more meaningful, related way. Later, when a question comes in, the question is also converted into an embedding, the most relevant data chunks are retrieved from the database, and they are augmented with the LLM to provide more reliable answers.

Let's build an example process to make the RAG strategy easier to understand:

  • Step 1: Collect all the business data
    • PDFs, Word documents, web pages, data records, etc. These can be documents, user stories, support tickets, or wiki content.
  • Step 2: Prepare the data
    • Clean the data: remove duplicates and split large documents into small chunks (this helps the AI process the data and create embeddings more easily)
  • Step 3: Add tags (optional)
    • Adding a tag or tags to the data chunks makes the system more efficient. After all, we are doing all of this to make our work more effective, right? 😀
  • Step 4: Convert the data into embeddings
    • Using an embedding model, convert the data into vectors according to their similarities.
  • Step 5: Store the vectors in a vector database
    • Store the data and its vectors in a database such as Couchbase, MongoDB, Azure Cosmos DB, or PostgreSQL.
  • Step 6: Augment the data according to the question
    • The question is converted into an embedding with an LLM model
    • Similar embeddings are searched in the vector database; the database you use matters at this point
    • The similar data chunks are retrieved
    • The retrieved data is combined with the LLM to produce a meaningful answer
I would say the RAG strategy is like an extra add-on for an existing solution. Not needing extra LLM training or massive compute power is a plus. The challenging part, I think, can be the data preparation. Not hard exactly, but if the data structure is messy it can take some effort. For the LLM's outputs to be effective, the data must be cleanly interpretable.

Once the data is organized, the RAG strategy also makes updating and changing the data much easier. When the data changes, only the data store needs to be updated, which also provides more up-to-date results.

Fine-tuning (Re-training a Model)

Fine-tuning is a way of embedding business data into a model. In this method, a pre-trained model is taken and re-trained with a specific set of business data. In simpler terms, it is like sending the AI to class before it answers. In other words, fine-tuning is teaching the business's knowledge to the AI.

I think this method is not as simple as RAG. It requires deeper knowledge of the training process. Also, to get effective results, the pre-trained model you use must be high quality and relevant to the subject. As you can guess, that is not always easy or sensible. For example, if you work in health insurance, a health-focused base model is more useful. Some existing models (such as Llama or Phi) can be used, but if your business data is not good enough, the results will not be very effective either, because most of these models are trained on limited, general data.

Even if you have a quality base model, you still need good knowledge of the training process. Training with the right parameters and running tests takes time; the training process can take weeks. If the data changes frequently, this method can struggle to deliver up-to-date results. These processes also require serious compute power, which means "cost".

I do not want to portray fine-tuning as a bad method. But in my opinion, with today's LLMs, it should not be the first strategy a business uses to get started. Of course, if the business is mature in data engineering and machine learning, it can be more effective by building its own model and re-training it with specific data.

In the future, I believe large global organizations such as the World Health Organization (WHO), Unicef, Greenpeace, the EU, and Unesco may publish domain-specific LLMs, just as they publish reports… That could make fine-tuning easier for businesses.

Conclusion

We could say that RAG is the AI doing a Google search first, while fine-tuning is tuning the guitar strings before playing.

Both strategies have advantages and disadvantages, and those depend entirely on the needs and the maturity level of the business. If you ask me, for now I would vote for the RAG strategy, because it is effective and efficient in terms of business value.

I summarized my thoughts in the table below:

| Strategy | RAG | Fine-tuning |
|---|---|---|
| Core concept | The AI retrieves business data externally | The AI is re-trained with business data |
| Better suited for | Frequently changing data | Static, specialized data |
| Cost | Low | High |
| Requires LLM expertise? | No | Yes |

If you have opinions, experiences, or questions, you can leave a comment below. See you in another post… 👋🏼

Every year, one of my friends (@Muhammed Hilmi Koca) in the developer community in Turkey curates some insights about software development trends. Lots of skilled and experienced colleagues share their ideas and thoughts. And thanks to my friend, this year he also gave me a spot to share some of my ideas.

Even though the original full text is in Turkish, I really suggest you check it out. There are really good insights.

I also wanted to share mine, translated, on my blog. So here are some thoughts about upcoming software development trends…

I believe that 2024 will be a year where artificial intelligence is scrutinized in the software world, and it will begin to be utilized more efficiently. Over the past 3-4 years, we have endeavored to understand the advancements in artificial intelligence methods and tools with a bit of amusement. We laughed and enjoyed ourselves by creating visuals and texts with “Generative AI” solutions… We also experienced that artificial intelligence tools are able to generate code at a very proficient level. With 2024, I anticipate that artificial intelligence tools will become a more significant player in software processes.

I anticipate that processes will strive to become more efficient by starting to prefer or incorporate artificial intelligence in "code review" processes or "refactoring" requirements. There are already companies attempting to integrate tools like GitHub Copilot into their code integration processes…

As these steps begin to materialize, in the medium term, companies may start integrating language models tailored to their own code inventories into their code development processes. I believe that resolving a business requirement through artificial intelligence will enable companies or organizations to address their needs while maintaining their own standards.

Programming Languages

I believe there is now a more informed approach to programming languages and platforms, with an awareness of their advantages and use cases. We no longer defend a language or platform to the death as we used to. Right? Or do we still defend them? 🤦🏻‍♂️ Oh, I hope not… I also believe that programming languages have reached a certain level of maturity. Innovations are now introduced regularly, according to real needs. Consequently, I believe the most suitable solution can now be chosen according to the specific requirements. However, I sense that the Rust language, which has gained more attention in the last 2-3 years, will stand out more this year in terms of needs, performance, and resource usage. (Note to self: finish reading the O’Reilly Programming Rust book already)

No Code

The inclusion of artificial intelligence in the game will draw a bit more attention to "no code" tools. Although, as programmers, we may still not find them very reliable, the involvement of people who do not know programming, through "no code" tools, will not only begin this year but will also lead to other developments in the medium term.

Cloud

“Cloud platforms” have now become the “default”. The ease of access to platforms and the maturity of services will maintain the position of cloud platforms in the software world. However, I think the “cloud exit strategy”, which is always on the agenda due to costs or global conditions, may gain a little more importance. Also, because of some business sustainability requirements and regulations, having, or being able to have, alternatives will be an important topic. “Cloud-native” solutions that eliminate platform dependency will still be a hot topic in 2024, and they should be…

IoT

We’ve been talking about IoT for a long time, but honestly, nothing has turned out the way I imagined. The Covid period affected the integration of 5G to some extent and, consequently, slowed down the interaction opportunities between devices, I think. However, the rising popularity of VR and AR glasses (again) will underscore the interaction between the virtual world and the real world, and so IoT solutions will start to become more visible.

Prompt Engineering

I started this post with artificial intelligence, and I’m ending it with artificial intelligence. With 2024, I believe the “prompt engineering” competency will begin to gain importance for programmers. Artificial intelligence will not replace programmers’ jobs, but I think programmers who can interact more easily and consciously with artificial intelligence will be one step ahead. I anticipate that this awareness will gradually be sought in job advertisements, because the questions asked of AI tools, and the expressions shared with them, affect the quality of the expected results. Perhaps it’s a bit utopian, but I believe we’ll see it happen… In 2-3 years, having good prompt engineering skills for AI tools is going to be as important as knowing a software development principle.

Until we meet again, happy coding…