
Developers, Embrace the Future

Guess what: Let's talk about AI.

"Oh no, another person talking about A.I, I CAN'T TAKE IT ANYMORE"

Last week I gave an introductory presentation in a communication and public speaking class that is part of a course I'm taking, and the topic was: "Beyond AI: Navigating the Future of Work".

While preparing it, I had to dig a bit into the AI world so I could understand the subject and convey the main ideas to an audience outside the IT field.

I opened the presentation with a controversial statement: "The majority of your jobs may not exist, or could be radically changed, within 10 to 15 years". And, indeed, that's what I believe.

Will Developers Become Unnecessary?

The obvious answer is NO (at least not for now). After all, before we reach the AGI that everyone fears so much, who's going to be in charge of writing all its code? :)

I confess I'd really like to keep doing what I do, the way I do it, for the next 10 years, and I even believe that's possible; but the cost of that, in my opinion, is missing out on relevant opportunities in my field.

I don't want to be alarmist, and I'd love to come back to this exact post in 5 to 10 years and say I was wrong, meaning that the vast majority of my colleagues (and I) never had to change the way we work.

But what I truly believe is that these new tools are changing, and have already changed, the way we work. Tasks often seen as tedious, such as writing documentation, refactoring a function, or autocompleting code, are already handled well by tools like GitHub Copilot, which, in my experience, works very well as a complementary tool. Using Copilot, I can ask it to write a function just by describing what I want, to generate test cases for code I've just written, or to produce detailed documentation for a newly created module. And from my interactions and the code I work on, it increasingly understands the context and offers better suggestions. The sketch below illustrates the kind of comment-driven workflow I mean.
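
To make that concrete, here's a minimal sketch of that workflow. Everything in it is hypothetical: the `slugify` function and its tests are the kind of code an assistant like Copilot might propose from a descriptive comment, not output from any real session.

```python
import re

# Prompt: "Convert an article title into a URL-friendly slug."
# The body below is the sort of suggestion an assistant might produce.
def slugify(title: str) -> str:
    """Example: "Developers, Embrace the Future" -> "developers-embrace-the-future"."""
    # Lowercase, collapse runs of non-alphanumeric characters into a single
    # hyphen, and strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# You can then ask the assistant to generate test cases for the code you just wrote:
def test_slugify():
    assert slugify("Developers, Embrace the Future") == "developers-embrace-the-future"
    assert slugify("  Hello   World! ") == "hello-world"
    assert slugify("") == ""

if __name__ == "__main__":
    test_slugify()
    print(slugify("Developers, Embrace the Future"))  # developers-embrace-the-future
```

The point isn't that this code is hard to write by hand; it's that describing the intent and reviewing a suggestion is often faster than typing the boilerplate yourself.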

I believe the change will be gradual. First, we'll integrate these tools more and more into our daily work (as we're already doing with ChatGPT and Copilot), and these systems will become increasingly robust and "intelligent" (ba-dum-tss), to the point of predicting our next coding moves with some accuracy.

Soon after, I believe we'll increasingly adopt speech interfaces and talk to our IDEs, so that we have to "write code" less and less.

After that, it's all a mystery; after all, we still can't predict the future, can we?

Critical Perspective

When we think about the rapid growth of AI and its enormous promise, we should also consider some ethical, social, and moral challenges. Especially in a society marked by significant economic and social inequality, the development of AI raises several critical concerns:

  • Universal basic income: AI will help us automate countless repetitive tasks, which will probably make some jobs irrelevant, and some people may not manage to adapt to that change. A universal basic income could protect workers whose roles become obsolete from economic hardship.
  • Unconscious bias: AI, no matter how advanced, is still trained on data created by us, and that data can carry human biases. The danger we should consider is that these biases get propagated into decisions that are vital to people's lives (the toy sketch after this list illustrates the mechanism).
  • Concentration of AGI in large corporations: the monopolization of AGI capabilities by a handful of corporations could create an imbalance of power. This would be bad for competition, privacy, and even democracy.
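
Here is the toy sketch of the bias mechanism. The data is entirely made up and the "model" is a trivial decision rule in plain Python; no real dataset, model, or library is implied.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved).
# Group "A" was approved far more often than group "B" for similar profiles.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

# Collect outcomes per group.
approvals = defaultdict(list)
for group, approved in history:
    approvals[group].append(approved)

# A naive "model": approve a group if its historical approval rate exceeds 50%.
model = {group: sum(outcomes) / len(outcomes) > 0.5
         for group, outcomes in approvals.items()}

print(model)  # {'A': True, 'B': False} -> yesterday's bias becomes tomorrow's rule
```

A real system is far more sophisticated, but the failure mode is the same: if the training data encodes a historical skew, the model will happily learn it and reproduce it.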

These challenges lead us to a question: "Is regulation an obstacle to innovation?" I don't have a strong opinion yet, but in my view a balanced approach to regulation may be necessary: guidelines that ensure ethical and responsible AI development, preventing exploitation and mitigating potential risks.

However, these regulations should be flexible enough not to stifle innovation. The goal should be to find a middle ground that protects society without inhibiting technological progress.

Ultimately, the debate about AI isn't just about technology; it's about shaping the kind of society we want to live in.

Conclusions

So, if it serves as an appeal, here it is: "developers, embrace the future".

After all, what authority do I have to say this? If you asked yourself that question, I'm sorry to disappoint you, but the answer is: none.

I'm just a technology enthusiast who believes in the power of study and personal growth. And, if it serves as advice: read up on these technologies, experiment with them, keep an open mind to innovation, and never, ever stop learning, because our field evolves rapidly and, yes, we need to keep up.

Happy coding.
