I was chatting (real chat!) to some robotics/gaming engineers last night, who use AI (ChatGPT, free version, though if it started charging they’d have to pay, because it is now essential) constantly for coding and debugging. And the other week I spoke to a friend who works for a big travel firm, who also finds AI brilliant for writing business and strategy reports. So why the need for such open-ended interactive experiences, the kind that have led to suicides? It emphasises what I said last week about Small Language Models vs LLMs. Nobody coding needs a model stuffed with Shakespeare’s sonnets, theories on the use of Stonehenge, the CO2 emissions of undersea volcanoes, or the best way to kill yourself, or others. Actually, if you’re into robotics or gaming code, you definitely should NOT be using a model with access to that sort of data – the killing people, that is; code written in the style of a Shakespeare sonnet could be a wonderful thing to behold…. Anyway, for coding, SLMs, which use a tiny fraction of the energy and water, would be far more appropriate for a productivity-enhancing purpose than LLMs, and with much reduced risk.
