Two questions after reading the article:
Under which license did MS release WSL?
They already have the "most replayed" graphs. I guess they will just place the ads at the peaks.
I notice that the "most replayed" mark is often at the end of in-video sponsor blocks; my hypothesis is that the YouTube algorithm detects where people land when they skip past those.
Or “write me a thesis of the current thought around Theory T by summarizing all the info you learned from Published Theses X, Y and Z, because I don’t want to buy a copy.” Which is how science works.
Nope, that is how scholasticism works, not how science works.
Cogito ergo sum doesn't do it for you?
With respect to a soul? No, not at all.
(BTW, I do have a master's & PhD in philosophy [specifically logic & philosophy of science], and I have read/studied Descartes' "Meditationes" & "Discourse on Method", as well as Aristotle's "On the Soul" & d'Aquino's "De Anima".)
Information itself has no weight.
It has a negative weight: the more information you put on a punch card, the lighter it becomes.
You're right - it's a certainty that future (mainstream) programming languages will be optimised for AIs rather than people, because it's another (very effective) way to disintermediate human devs out of the process.
I for one am looking forward to the return of Hexcode as a mainstream programming language.
(For giggles, I just googled Hexcode and ALL results on the front page were about colour codes in web pages... anybody else around here still reads "3F" as "SoftWare Interrupt" or "A6" as "LoaD accumulator A indeXed"?)
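For anyone whose 6800 muscle memory has faded, here's a minimal sketch of that hex-to-mnemonic reflex in Python. The opcode values are recalled from the Motorola 6800 instruction set, so treat them as illustrative rather than a reference:

    # Toy lookup of a few Motorola 6800 opcodes (values from memory;
    # illustrative only, not an authoritative opcode table).
    OPCODES_6800 = {
        0x01: "NOP",      # no operation
        0x39: "RTS",      # return from subroutine
        0x3F: "SWI",      # software interrupt
        0x4F: "CLRA",     # clear accumulator A
        0xA6: "LDAA,X",   # load accumulator A, indexed addressing
    }

    def read_hex(code: bytes) -> list[str]:
        """Map raw opcode bytes to mnemonics, flagging unknown bytes."""
        return [OPCODES_6800.get(b, f"??? (0x{b:02X})") for b in code]

    print(read_hex(bytes([0xA6, 0x3F])))   # ['LDAA,X', 'SWI']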
There is not a single credible AI expert who thinks that scaling by itself leads to AGI, or even to further advances in AI. This is a strawman. Are there really 24 percent who think that scaling by itself leads to AGI? Wow, that's really hard to believe.
From this article, there seem to be many who indeed believe that:
“Over the past year or two, what used to be called ‘short timelines’ (thinking that A.G.I. would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
The early success of LLMs surprised their developers and excited investors
Hmmm, haven't we seen this before?
The early success of machine translation and perceptrons surprised their developers and excited investors in the 1950s and 1960s.
Then we got the first AI winter of the 1970s.
The early success of expert systems surprised their developers and excited investors in the 1980s.
Then we got the second AI winter of 1990-2010.
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
In that case, wouldn't it have been better for the person who hard-coded this response to make it say "to continue, buy the full version" instead of "I won't do your homework because you should learn how to do it yourself"?
Heavier than air flying machines are impossible. -- Lord Kelvin, President, Royal Society, c. 1895