Why you might need AI less than you think
=========================================
LLMs (aka AI) are models that have been trained to carry out human-like
conversation, mimicking (effectively or not) the linguistic parts of the human
brain: the parts responsible for forming sentences, paragraphs and expression
in native (human) languages.

Many are using them for coding. I strongly believe that this is a wrong use of
the tool. They are trained for reading and writing text, and the job of
"writing code" is inherently different in nature.

When we want to code a project from an idea or from a set of objectives, we go
through a process of understanding those objectives and translating them into
something computers understand, by writing a codebase in some programming
language. Programming languages, though, are not governed by the same rules
and logic as human ones. Things in human language might make no sense to a
computer.

There is a wrong assumption that since we are telling a computer to do it, the
task will come naturally to it. That's a common misconception. As already
said, these programs/models are trained to mimic human language, not computer
languages. Asking them to code is not like asking someone "how do you say 'my
name is ...' in your mother tongue". They don't have one.

What an LLM will do (in short) is read your text and try to express it as
code. The outcome is closer to human text than to real algorithms or native
use of a programming language. In other words, it's good until it's not. This
can be witnessed far more easily when one is prooompting for lower-level
languages, but it shows up plenty of times in high-level ones too.

For example: provide a function written with bad naming conventions (both the
function name and its variables) in a type-safe language like golang, include
a comment about its usage, and ask for the function and variable names to be
renamed, nothing else. The reply might change the types of the variables as
well, rename only some of the identifiers, or simply hallucinate, all from a
small and concrete prompt. In my case, I had to ask again (around 5-6 times)
until it got it right. That's what I call "losing time with garbage tools".
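
To make the scenario concrete, here is a sketch of the kind of function I
mean. The names, types and comment below are invented for illustration; this
is not the actual code from my prompt:

    // doStuff sums the prices in a cart and applies a tax rate.
    // (hypothetical example, not the original function)
    func doStuff(a []float64, b float64) float64 {
            var c float64
            for _, d := range a {
                    c += d
            }
            return c * (1 + b)
    }

Asking only for renames should give back the exact same body with better names
and every type untouched, something like:

    // sumWithTax sums the prices in a cart and applies a tax rate.
    func sumWithTax(prices []float64, taxRate float64) float64 {
            var total float64
            for _, price := range prices {
                    total += price
            }
            return total * (1 + taxRate)
    }

A reply that turns taxRate into an int, or renames total but leaves d as it
was, has failed this small and concrete task, and that is the kind of miss
described above.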

I have countless personal examples of using LLMs in ways that wasted more time
than doing the thing myself would have. I have also read lots of articles that
reach the same conclusion about wasted time and go on at length analyzing
hallucinations, or "discoveries" of bugs that aren't there, or bugs that are
there but that only a small percentage of these tools actually figure out.

Note that the claim here is not "I am better than AI", nor some inferiority
complex. In short, anyone with intuition is better than AI, and intuition
comes installed by default in every human, so there's that: we are all better
than AI.

The claim is the following: in order to get things going, people are choosing
convenience over factors they don't really understand. This creates a kind of
debt, known as knowledge debt. If you are doing it for your own sake and
nobody else will ever see it, then yes, you can go nuts with copy-pasting. And
to be real about it, I did a lot of copy-pasting in my early years (pre-LLM
era). It's not inherently bad. But there are more steps to it: you copy-paste,
you try, it might not work as intended, you edit it, retry, edit again, done.

On collaborative projects though, this is very different. One might just want
to complete a project, rushing to a final solution without any criticism or
thought about what should or should not be there. No understanding of the
architecture, no will to change anything in the code as long as it works,
leaving codebases in a huge mess: badly written, full of repetition and not at
all simple.

To me, this means that for the sake of not putting in the work, you put in
almost the same amount of work, get into knowledge debt, pass it on to your
colleagues, deliver badly written code and take the credit for being "fast".
Before going on about what comes with this approach, let's quickly discredit
the "fastness". It is not "fast". "Vibe-coding" is totally unrelated to coding
and closer to "testing". The approach has severe drawbacks which one might
think will never show up, but they are just waiting around the corner.

People who end up working with such "testers", while trying to grasp the
concepts of good practice, might have a really bad time reading such code. If
the architecture of the whole project is simply bad, that alone adds time.
Repetition requires deduplication, which takes time. People who don't want to
put in the time will lose interest. People who don't want to refactor badly
written code will lose interest. And that's a problem.

Your future (or current) manager might not even know how to read code. Having
a manager used to quick delivery is not a bad thing in itself. But it turns
bad when smaller features start needing more time than the original codebase
took to write. Managers will notice that.

Quits, firings, bad reputation, bad relations: a hostile work environment, in
short. Will LLMs help when things reach that level? I don't think so. Learn
your craft! You can do it!

There are tools (yeah, AI ones) that specialize in coding, but they tend to
come with costs or limitations. If these tools are the only devs you know of
that are available for hiring, what can I say, go nuts. But don't
forget(!!!!): now you are paying someone else, which is bad. You probably work
a 9-5 to make a living. The company you work at is not yours. You don't do the
hiring, your manager does. Ask them to hire people and to set the standards.
The money you are making is for you to keep, not for buying stuff for a
company you don't own. Hello!!!

We are still using software written in the 1970's. From back then until the
pre-LLM era, software was written entirely by humans. That's more than 50
years. The hype promoting AI is creating a crisis in people's minds, and lack
of understanding amplifies its effects. The manager we mentioned before might
have no idea how to write or read code. Seeing LLMs spit out all this output
looks nice to them, but it's not realistic. To them it looks productive; for
devs, sooner or later, it will be counter-productive.

So why might you need LLMs less than you probably think? Because you possibly
started coding recently, and a learning curve is a learning curve: it's
natural to get overwhelmed, bored, lazy, or to just want to see some results.
Learn your tools instead: your editor, the compiler you are using. Go in
depth, or at least reach a level of understanding. Write code knowing why you
wrote it. That's "owning the code".

This article was inspired by a lot of discussions, personal experiences and
articles. Unfortunately, I won't be referencing the articles. Truly, though,
my intention is to raise awareness of my humble opinion, which, while
seemingly unpopular, might express statements that others also agree with. The
point, however, is mostly for people who might not have thought about this
before and have possibly dealt with the issues mentioned. A lot of these
things can cause frustration, and when that emotion comes up, people tend not
to explain the reasons and just leave, stop talking, break collaborations or
find other ways of avoiding confrontation.

While the following could have been a heads-up, I feel it fits much better as
closing thoughts, for clarity. I personally stand against this LLM/AI hype. I
find it stupid, extortionate and very disturbing. I don't like big
corporations either. My understanding is that these organizations are trying,
once again, to monopolize various sectors of the human-driven workforce so
they can gain more for themselves. I find it plain stupid to waste time using
those tools and, in the process of doing so, training them to do it better.
Therefore I personally discourage anyone from using them. If you do decide to
use them, my advice is to make one simple prompt at a time and never engage
with them after that. Don't train your competitors for free. You are being
used. Any governmental regulation leaves me indifferent, and utopias in which
it transforms society in a way that benefits everyone lack understanding of
how governments and capitalism work.

Finally, as I really dislike the "conclusion" part of every article I read on
the internet, you are encouraged to draw your own. If you used some LLM to
summarize this article, I assume you can't gain anything from it, because
reading it requires time and work, which you seem unwilling to put into
anything. Value comes from the work you put into things; if you don't put it
in, things just get cheaper, and that doesn't guarantee any kind of quality.
Maybe harsh, but I honestly can't care more.

If you really read it as it is, then I thank you for your time and effort. I
hope you find ways to include it in your thinking and internal processes. In
case you disagree with what I wrote: firstly, we might have different
purposes; but secondly, I hope it adds to your omni-opinion development.

This took long enough to write. I won't make a series of articles about this,
as engaging with current hypes is not my lifestyle, so don't expect
follow-ups.

Again, thank you for your time,
kaotisk