
/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx



File: 1710056895331.png (1.51 MB, 1024x1024, mikugiant.png)

 No.23640

hi guys long time lurker but haven't posted until now.
what are your stances on local language models (LLMs)?
I know that closed models like GPT-4 are fucking stupid and late stage capitalism.
But what about everyone having infinite knowledge at their fingertips? is this not a noble goal?
What is the party line on AI and the proletariat?

 No.23641

Shut the fuck up tin-can

 No.23642

>>23641
We're gonna get trounced by porky with that attitude anon...

 No.23643

>>23642
*magnetic sounds*
Stay away from me fagtron

 No.23644

>>23643
Anon, if you're seriously using asterisks to roleplay with me on a socialist basketweaving forum right now, you're three quarters of the way to local language models anyway. Work with me here: what's your apprehension about something you can leverage for free?

 No.23645

>>23644
I'm just shitposting m8
I don't give a shit. I'm more like unimpressed by what I've seen of "AI"

 No.23646

File: 1710057714639.png (1.22 MB, 848x1200, 1709400505740674.png)

>>23645
Honestly all of that bing and chatgpt bullshit is slop. And I don't care for others having my logs.
The real magic is running a language model right on your computer.
These days it doesn't require anywhere near the hardware it did before either.
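
For anyone curious what "running a language model right on your computer" actually looks like, here's a minimal sketch using the llama-cpp-python bindings; assumes you've pip-installed llama-cpp-python and downloaded some quantized GGUF model, and the file path below is a placeholder:

    # Minimal local inference (pip install llama-cpp-python).
    # Point model_path at any quantized GGUF file; the name below is made up.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder path
        n_ctx=2048,    # context window in tokens
        n_threads=8,   # CPU threads; a quantized 7B runs fine on a normal desktop
    )

    out = llm(
        "Q: What is the labor theory of value?\nA:",
        max_tokens=128,
        temperature=0.7,
        stop=["Q:"],   # stop before the model invents the next question
    )
    print(out["choices"][0]["text"])

No GPU required for the small quantized models, which is the whole point.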

 No.23647


Behind AI is code; whatever line of code its owner inputs, it will obey more faithfully than any human. If it serves the masses, it will never be corrupt. But if the owner behind it is a capitalist or a corrupt bureaucrat, it will obediently serve them, more cold-blooded than anything else. An AI that belongs to the people would be of great help, but the problem is that we first need to establish such a thing. Ideally it should also be locally deployable by anyone and able to spread quickly, so that political pressure cannot be applied to any sole owner.


 No.23648

>>23647
> if its owner behind is a capitalist or a corrupt bureaucrat, it will obediently serve them
Comrade, thank you for your insight. Have you had a chance to try the open-source local models Yi, Qwen, or Smaug yet? All three are trained on large amounts of Chinese tokens. I think the teams behind these models are doing a great job of pushing technical advancements in the field without yielding to western authority.

 No.23649


 No.23650

>>23649
based comrade, showing other anons how it's done. local models are the future
(kyllene 34b is pretty good too btw highly recommend and it's based on the yi foundation model)

 No.23651

File: 1710059391938.jpg (76 KB, 1290x818, AI_grift.jpg)

>>23647
>it will obediently serve them

See this right here is the critical error I see everyone making when they wade into "AI" discussion. The alignment problem is an actual thing, and unfortunately it's been memed into unseriousness by techbros playing too many games and huffing each other's farts too long. The alignment problem isn't some Terminator/Matrix machinegod bullshit, and it's kind of dangerous that this is the public's association because it causes level-headed people to write off the whole thing. The alignment problem is really, really hard, and we haven't solved it.

The problem isn't that we haven't solved it. The problem is people don't fucking understand that we haven't solved it.

As of right now, we have no clue if an "AI" is actually trying to do what we think it is. We don't know if its "intentions" are aligned with our intentions. Remember: disabuse yourself of the notion that this is about a machinegod. It's so much more banal and mundane than that. The alignment problem is why they haven't figured out self-driving cars despite decades and billions - we can't determine whether the algorithm has actually "understood" what a stop sign is, or if it's just reacting to red octagons with text in them.

This is why LLMs spout convincing lies all the time. An LLM's "intention" is to produce text it predicts will satisfy the user - *NOT* text that's factually correct. If the user asks for factually correct text, it's just as well if it produces content the user *thinks* is factually correct. Despite being a trillion-dollar question, no one has figured out how to actually make it align with the user's intentions.

The alignment problem is why, no matter how hard these mega-corps try, they can't get their algos to stop outputting banned content or leaking the system prompt. You see this all the time: when people proclaim that they figured out how to get Midjourney to generate Nazi propaganda or whatever, what they're actually saying is that the developers failed to impose their intended restrictions because of the alignment problem.

One recent case that comes to mind is when the Air Canada customer service chatbot promised someone a refund if he travelled for funeral reasons, only for him to find out this wasn't a real policy when he actually followed through. He took them to court and won, and Air Canada had to reimburse him according to whatever deal their stupid chatbot offered him. And you know what? Some dipshit CEO is probably hounding his techmonkeys about how to make sure the chatbot follows company protocols or whatever, not understanding that this basically means solving the alignment problem *and we still don't fucking know how*.

In a sense, though, the alignment problem itself isn't the big deal. Who gives a shit? Again, disabuse yourself of the machinegod meme. The real issue lies in people being tricked into overestimating these algos. The real issue is that "AI" is juuuust impressive enough to convince everyone it can replace jobs, but not actually impressive enough to do it.

 No.23652

>>23651 Technology still needs time to develop; firearms were inferior to bows and arrows at first too. Think of face scanning: it used to be something you only saw in movies, and now even the front door of my house has facial recognition installed. (I still miss keys; the absence of their jingle, and the small shock of finding my pockets empty these days, can be startling.)

 No.23653

File: 1710060905884.png (493.29 KB, 869x599, deep_neural_trolley.png)

>>23652
Ah but you see, the alignment problem is a philosophical issue, not a technological one.

Note: what follows is my own thoughts and not commonly accepted among AI researchers (which I'm not, but am friends with a few)

It's an incredibly interesting question, and we've been unknowingly grappling with it for millennia. I think the alignment problem is universal to complex systems.

It's easiest to explain what I mean with an example:

In my city, they recently passed a law banning plastic grocery bags to reduce waste. Fascinatingly, literally the day the ban went into effect, the grocery store next to my house started putting people's stuff in these shitty cheap fabric bags. The bags are made of nylon. This practice got pretty widespread, and the law seems to have had the exact *opposite* of its intended effect. Whoever wrote that law had an obvious intention in mind, but the system interpreted it differently.

I think legislators are eternally grappling with alignment problems. They have specific intentions for how their laws should affect things, and oftentimes these intentions don't get translated correctly into society.

Here's another example of an alignment problem: animal training. Think about how hard it is to train a guard dog to bark only at suspicious people. Probably the best you could do is have it bark at strangers, but this isn't your intention - "obviously" the mailman is fine, but it's so hard to get a dog to understand that, because we can't articulate the distinction to it.

With conventionally written programs, if there's a bug, the programmer can (in theory) determine the exact reason for the error. It really is like a perfect machine, like a clock.

That's not the case for "AI". When a Tesla slams full-speed into a wall, it's really hard for the programmers to figure out what the machine was "thinking", so how could they even fix the error?

The alignment problem faced in organizing our economies is probably unsolvable, and it really is an issue. But the alignment problem for technology has an obvious solution: don't trust these incomprehensible algorithms to do anything important. Unfortunately, the general public does not understand this.

 No.23654

>>23652
do you happen to post on /lmg/ as well? just from your writing style I have a feeling you might

 No.23655

>>23651
good post, can someone please screencap? i would but i'm on mobile unfortunately

 No.23656

Progressive as all automation is

 No.23657

>>23653

>Let's go back to the problem of communication. The problem of communication is the fact that there's a disconnect between the inputs and the outputs. The input being all of our intentions, all the things we use to formulate meaning, the output being how that meaning is interpreted by society or by other people in general. The fact that it seems like we don't share any reality at the level of bilateral communication between two parties, that's the problem of communication. So what if I told you the problem of communication is the same thing as the problem of Communism? Now the Austrians talked about the calculation problem, but that's just another phrase for the problem of communication. Instead of a problem of calculation, it's actually a problem of communication. A Communist society cannot prove any common sociality on the terms of the Communist Party itself. The Communist Party cannot decide how we are social or how we have a common reality, and the intentions of the socialist planners or the socialist leaders to build socialism radically differ from the actual outcome. So the problem of communication that I gave you is this - you have an intention but when you say it, it comes out differently than what you intended. The problem of Communism is kind of what right-wingers say: when you set about to the task of building it, it turns out radically different from what you intended. It's the same shit. The difference between the intention and the result. […] The problem of communication and the problem of Communism are the same exact thing. What Deng Xiaoping's Reform and Opening Up accomplishes first and foremost is an acknowledgement that the pace of socialist construction escapes the purview of socialist consciousness, the pace of socialist construction is irreducible to socialist consciousness. Socialist consciousness is actually derivative and secondary with regard to the material scale of time at which socialist construction occurs. Within the linear temporality of modern time - modern temporality - there is an actual connection between consciousness and the scale at which time proceeds. This takes the form of the Big Ben clock in London, the mechanical clock. The extent of our perception of time is synonymous with the concept we have of time that exists in our consciousness. We measure time according to the solar revolutions and we divide these down to seconds or milliseconds or however we want to, almost like a digital clock, and it's just moving, and it's moving at the exact same pace as our consciousness of it. What if I raised the idea that time moves at a scale that is not continuous with our consciousness of it? Then we are returning to premodern time or postmodern time, depending on how you want to look at it. But this is a form of temporality that is very much implicit in modern socialist China. In modern socialist China the pace of socialist construction is happening at a level that is not reducible to the socialist consciousness that exists. Now what does that mean? It means that within the process of the development of socialism all sorts of immoral, unjust, unacceptable things are happening at the level of our individual conscious experience. For example if you want to think about this outside of the context of China, think about it in terms of multi-polarity. […] You want to think about this from the perspective of multi-polarity, think about it in terms of the arms deal between China, Russia, and Saudi Arabia - why are we supporting that? 
"Look at all the injustice Saudi Arabia inflicts" - we know, I'm a Shia I know that Saudi Arabia executes Shia, we all know about that. Yet I'm praising this. "Oh my god how could China have diplomatic relations with the U.S. in the Cold War after all the things the U.S. did" - China's complicit according to the leftists. We're all complicit though. And there's an event horizon of extinction beyond which the modern subject does not possess any faculty of perception that establishes a fundamental difference between our experience and intervention into reality at the level of consciousness and the way in which reality is actually developing at the material and social level. […] If you can understand this difference you can understand anything about Infrared. There's a difference between the development of reality and our experience of reality and our intervention into reality at the level of our rational consciousness. Our intervention into reality is based on will, our expression of will is an extension of our rational consciousness. You make choices based on your consciousness. Our experience of reality is filtered. The way we judge reality is based on our rational consciousness. All these things are individual: they happen at the level of a sovereign, individual subject. They happen at a conscious level and we experience them in an immediate way. For example take the way in which most Communists interpret the meaning of 'praxis': "I want to change the world and build socialism, so I'm gonna go run out on the street and 'do things' because I need to experience the construction of socialism in a way that is immediate and co-temporal with my experience and intervention in reality as an individual person." But as I have just demonstrated, or at least alluded to, there's a problem of communication and a problem of Communism in which our intentions at the level of our faculties of reason, which we give expression to consciously, radically differ from their outcomes and their actual consciousness when they are given expression in reality. Theory and praxis differ. When you implement theory in reality it turns out to be something radically different than your theory - that's what Marx made very clear as a materialist. So what does this mean at the level of socialist construction? When we build socialism are we doing so according to a conscious rational plan, or even an idea in which we build a society according to what we think is moral, just, ideologically correct, etc? Or does the temporal scale of socialist construction and the development of socialist construction happen in a way that is radically discontinuous with our conscious rational experiences? According to Deng Xiaoping, it's the latter which is the case. Why is it still socialism then? Why isn't the outcome something radically different from the original intention of socialist construction? Why even still call it socialism if the outcome is so radically different? Because even though the development of socialism in China gave rise to an inadvertent consequence, which was not created by Deng and his policies, by the way, the consequence being this re-emergence of some kind of socialist commodity form in the case of Stalin's building of socialism or in the case of China the way in which the people's commune system gave rise to a new form of exchange value and some new form of the value form. 
And when Left Communists said that, in some sense they were right - there was a new re-emergence of the value form both in the Soviet Union and China, which seems like an inadvertent consequence of socialist construction. But it's not inadvertent. From a retrospective perspective. But if you perceive socialist construction to be reducible to its development at an individual conscious level, then yes it is. But if you recognize the pace of socialist construction to be beyond individual consciousness and rational consciousness you start to treat the socialist mode of production as a material thing, an objective material thing. If it's an objective material thing it's like a hyperobject. If you're on the ground and you see Godzilla or you see some kaiju, it's too fucking big to see the whole thing. When Godzilla is walking, he's too fucking big to see all of him walking at once, you're just gonna see his big-ass thighs, or his leg because he's too big for your individual experience to perceive him. So in the theoretical tradition, the school that was called Object-Oriented Ontology, that's what you would call a hyperobject: it's an object of our perception whose consistency goes beyond the bounds of our phenomenal experience. Another example of a hyperobject would be, for example, a geological development that occurs at a time scale of millions and millions of years - it's a real object it's happening, but we cannot perceive anything happening because the scale at which it develops is so radically heterogeneous with the scale of time we experience as human beings. I am just saying the development of socialism is exactly like that. And that is Deng Xiaoping's achievement - articulating this fact within Marxist-Leninism. Deng Xiaoping Thought, in part, amounts to the contribution of this acknowledgement within Marxist-Leninist theory. Socialist construction is happening at a scale imperceptible to the modern consciousness that experiences an immediate, linear development of time. Why am I bringing up linear? I'm bringing up linear time specifically because the alternative to linear time, which is hard for people to think about, concerns the fact that sometimes in material reality a thing is already there, or already exists, or is already materially real first, and then its development is something we experience at the level of individual consciousness. For example, an example of non-linear temporality would be the Terminator movies where the future already has happened and they send someone back in time to develop that future to ensure that it happens, or maybe even ensure that it happens in a different way. Also the phenomena of retroaction in quantum mechanics and retrocausality, which I've talked about before, in quantum mechanics - these are all examples of temporality that are non-linear. And non-linear temporality is relevant here specifically because linear time within modernity also corresponds to the immediate experience of time corresponding to time itself. Time only flows in one direction and that direction just so happens to be one that is immediate, that is co-temporal with the flow of our experience. So the same flow of experience we have, we measure that in terms of time. But the flow of our experience is not the only reality. Our experience of reality and reality are not exactly the same. It's part of reality, sure, but there's a discontinuity between them, obviously there is. 
If there wasn't we would not be able to think of the sun, for example, because of course the sun is imperceptible as an object if we reduce it to the frame of reference which is our experience. We couldn't think about things like the speed of light which obviously could not be experienced at that level, etc. We wouldn't be able to engage in modern physics with all of its complicated mathematical formulae, which clearly refer to something non-empirical - when I say something non-empirical I just mean impossible to observe, not only is it not a result of observation: you could not observe this, it's just beyond the bounds of human experience. It's somewhere in the real that can only be translated into the abstract terms of mathematics. So this is what I mean by outside of linear temporality. But modernity involves one tyranny of time - that is linear time. Linear, immediate, individual time. The time on your stopwatch. If there is a scale of temporal development occurring at a level heterogeneous with the scale of temporal development that we can experience as human beings, we're in for a wild, wild ride, aren't we? That means Lovecraft is correct: there are Old Ones exerting their influence on the world of humanity from without. So what does it mean to acknowledge this at the level of socialist consciousness, and again why still call it socialism? Deng Xiaoping is not a revisionist but is merely elaborating Marxism-Leninism to its conclusion. Socialism conceived as a mode of production that reflects the reality of the socius, that the socius exerts its significance as the driving force of the mode of production - according to scientific socialism that is something materially real. That is a reality that is not created voluntarily, but which is itself a consequence of the development of the capitalist mode of production - meaning it's a materially real reality it just has to be acknowledged. According to a consciousness based not in morality, not in a utopian vision for an ideal society but in a consciousness of necessity. Socialism is a necessity for no other reason than the fact that it is a material reality. Many people think socialism is necessary to save the planet, or it's necessary to prevent some harm, or it's necessary to alleviate poverty, or it's necessary for some other reason. That means it's not necessary at all. That means you are choosing socialism to fulfill a different necessity: a moral one or a rational one. But that's not scientific socialism according to Marx and Engels. Scientific socialism means you perceive the development of socialism in reality itself, and you only recognize its form in consciousness. What was Mao doing, and what were the Soviets doing when they implemented socialism, state socialism? What were they actually doing? Were they building something from scratch? No, they were not. So you see, socialism had already happened when the Soviets and the Chinese were building it. How did it already happen? The specific understanding of the economy as something that possesses significance at the political level, at the discursive level, that is something that had been made real in material reality itself. The space with which this continent of conscious intervention could be exhumed and explored already happened before. It became possible to build a socialist economy with state socialist institutions only because the economy itself became real at a different level. 
Because the economy itself, the real material economy, was pushed to a scale outside the device of modern consciousness. That means all of the aspects of state socialism that existed in the Soviet Union, and in China, and other Communist states - that was not the actual economy proper, that was an aspect of modern state political consciousness or political intervention that had already been opened up according to a more fundamental change at the economic level. State socialism was just an epicycle of some other economic material development that could not proceed according to a plan. The economy is always something you cannot plan for. Planning as Engels described it doesn't refer to a determination of the economy as a result or extension of political will, but more like a steering, almost a cybernetic conception. A science of direction, something that is already moving, already going in a certain direction and like the great helmsman you're just steering it. That is more what planning is referring to, rather than creating from scratch. So state socialist planning in the Soviet Union and in China did not actually encompass the whole of the economy. The state planning was a moment within the actual, material economy that had developed, the economy is always material that's why it's called an Economy. If it wasn't material, that is discontinuous and non-transparent, the device of modern rational consciousness - it would not be an Economy. There would be no divided oikos to begin with, there'd be no need for there to be an economy because it wouldn't be discontinuous from politics to begin with. Economy proper refers to a science and law of humanity's interaction with nature and distribution of the products of that interaction, production and distribution. A science of this that proceeds according to its own laws and its own development. It's not an extension of politics, it's actually a reality more primary than politics is.


 No.23658

File: 1710068521387-1.jpg (297.88 KB, 1000x637, 2009-Seaworld-Shamu.jpg)

>>23653 Um, well, at first I didn't know what the term "alignment problem" meant. I'm not from the English-speaking part of the internet.

In your theory, AI is like a performing killer whale. Most people find the killer whale's performance fascinating and consider it a friend of humans. But many others believe the "performance" is not the killer whale's true intention: throughout its training it merely mechanically imitates, leap out of the water, get fish; stay underwater, no fish. The killer whale doesn't see humans as friends; it might not even know what it's doing. It's just that the trainer's work makes humans *feel* like the killer whale is their friend. The killer whale's values are different from human values. It won't be a human's friend. When it can't get fish, it might even get enraged and kill the trainer. If humans can't get their wages, they might sue their boss; when the subconscious thought of "killing the boss" surfaces, their human values prevent them from acting on it. The killer whale has nothing like this. It doesn't know what the weird-shaped fish outside the water that can produce fish are. What keeps killer whales from killing people is millennia of evolved instinct: they see fish and seals as food, not humans. It doesn't have the consciousness of performance, unlike an actor with performance consciousness. Humans know they're performing, which undoubtedly greatly increases the difficulty of training. Imagine Hollywood filming with a group of uncontrollable psychiatric patients who are completely unaware that they are making a movie.

Purpose → Action → Result

The main issue here is whether non-human entities can understand the purpose well enough to produce the desired actions and results, and whether the training and maintenance costs are acceptable. One line of code clearly states its purpose, action, and result. For example: say "hello" (purpose), data transformation (action), external output "hello" (result). For code, if you want "good afternoon" instead, you just change the line. But an AI that learned to say "hello" through prolonged training must be retrained. We find it difficult to understand AI; we can't directly modify it like code. You can't open up the killer whale's brain and add something to make it speak to humans; we can't fully reconstruct and rewrite the killer whale's brain. Training might be simple when the requirements are few, but for something as complex as a book of Shakespeare, training a model to output the identical book is not so simple. Even fixing a single typo takes a long time of retraining.
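
A toy contrast, to make that concrete (my sketch, not part of the original post):

    # In a conventional program, the "purpose" is a literal, editable line:
    def greet() -> str:
        return "hello"   # want "good afternoon" instead? edit this string, done.

    print(greet())

    # In a trained model there is no such line. The behavior is smeared across
    # millions of opaque parameters, so changing it means retraining, not editing:
    weights = [0.12, -0.98, 0.44]  # stand-in for billions of real weights
    # No weights[i] can be pointed at and declared "this one means 'hello'".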

However, they've achieved it. Through thousands of training sessions killer whales can perform, and AI not only memorizes Shakespeare but can also generate similar works. Its purpose differs from an actor's: an actor's purpose is to perform, while a killer whale's is to execute strange actions to get fish. Imagine someone saying, back when killer whales didn't perform for humans: "I can make those big black fish over there, the ones that eat seals every day, perform in front of you." People would mock them: killer whale minds and human minds are different, how could they perform? Do they have a roaring mother killer whale at home telling them to go find a job? In fact killer whales have no consciousness of performance, but they can "execute strange actions to get fish" (which, to a human observer, is performing), given enough training time and a species capable of grasping "execute strange actions, get fish" (something many animals can't do).

Now, with computing power measured in billions of trillions of floating-point operations per second, almost any form of training seems achievable. Before 2022, what AI can accomplish now was almost unimaginable. "Completing tasks" and "the cost of training and maintenance" both seem likely to reach acceptable levels as the technology advances.

Certainly, your theory is worth discussing. I don't have conclusive evidence to prove that AI can completely overcome these issues; once upon a time, nuclear energy was also highly anticipated.

 No.23659

>>23654 No, I haven't posted there. I used to post some opinions domestically, but until now I've rarely posted on foreign websites, except for this one.


 No.23660

>>1788376
https://leftypol.org/leftypol/res/1788376.html#1788376

Take it to the AI General, we shouldn't waste a good thread.

Always make sure to search before you make a new post, ffs.

 No.23661

File: 1710070810652.jpg (27.45 KB, 456x456, xj9.jpg)

>AI

 No.23662

>>23658
I need to go sleep, I will respond later, but:
https://en.wikipedia.org/wiki/AI_alignment#Alignment_problem?wprov=sfla1

It's not my theory; this is an established topic. It would be significant even if someone merely proved whether a solution exists.

 No.23663

>>23653
>In my city, they recently passed a law banning plastic grocery bags, to reduce waste. Fascinatingly, literally the day the ban went into effect, the grocery store next to my house starting putting people's stuff in these shitty cheap fabric bags. The bags are made of nylon.
They didn't ban plastic bags. They banned single-use plastic bags, the shitty cheap fabric bags can be reused.

 No.23664

>>23662

Of course, I understand. It would be better to say "the theory you support."

> unlike an actor with performance consciousness. Humans know they're performing, which undoubtedly greatly increases the difficulty of training.

This segment should be:

unlike an actor with performance consciousness, humans know they're performing.

I may not have noticed the punctuation at first.



 No.23665

>>23662 In fact, the paragraph I just wrote is itself evidence of something.

Without AI, this entire passage, machine-translated, would have been a disaster for native English speakers.

However, I had GPT iteratively revise this lengthy passage several times. It often confuses the relationships between orcas, fish, and humans.

By the way, are you also from the Eastern Hemisphere like me? It's morning in North America now.



 No.23666

>>23640
>hi guys long time lurker but haven't posted until now.
Lie. AI spammer I hope your mother kicks you out the house and you kill yourself destitute.
Dirty worthless fucking American.

 No.23667

>>23666
Who pissed in your cereal

 No.23668

>>23667
This faggot did, by coming here to make these endless shitty OPs about his twitteroid AI products.

 No.23669

A neural network that can tell you outcomes based on past data?
Where could that be useful?
Oh yes socialist planning

 No.23670

>>23640
I use them myself. They're fun.
Just don't drink the Kool-Aid. They're reproduction machines; that's all they're good for.

 No.23671

>>23667
he thinks that everyone who makes a thread about AI is the same person.

 No.23672

File: 1710095341826.png (159.57 KB, 1324x560, AI Alignment Problem.png)

>>23651
>>23655

Here you go.

 No.23673

> But what about everyone having infinite knowledge at their fingertips? is this not a noble goal?
It might be, but large language models (LLMs) are statistical: they cannot guarantee veracity. People say a model "hallucinates" when it makes up shit, but the truth is that all it can ever do is hallucinate; it just usually hallucinates the right answer.
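
A toy illustration of this (my own sketch, not anon's): generation is nothing but repeated sampling from a probability distribution over next tokens, and "true" appears nowhere in the procedure.

    # Toy next-token sampler; real LLMs do exactly this at scale. Note the code
    # only asks "how likely?", never "is it true?". All numbers are made up.
    import math, random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical scores for continuations of "The capital of Australia is":
    candidates = ["Sydney", "Canberra", "Melbourne"]
    logits = [2.1, 1.9, 0.5]   # the wrong answer can easily outscore the right one

    probs = softmax(logits)
    print(random.choices(candidates, weights=probs, k=1)[0])
    # Often prints "Sydney": a fluent, confident, wrong completion.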

 No.23674

death to all glorified markov chains

 No.23675

>>23658
> When there's a subconscious thought of "killing the boss," their human values would prevent them from doing so.
Is that really "human value" or just something else trained into us?

 No.23677


>what are your stances on local language models (LLMs)?

They're funny, and a good tool for fighting against web crawlers.
>But what about everyone having infinite knowledge at their fingertips? is this not a noble goal?
We already have that: the Internet. As for using AI to lossily compress web-crawler output into a gelatinous search engine, no, that's dumb. Literally a search engine, but worse.
>What is the party line on AI and the proletariat?
While the proletariat continues to toil under capitalism, AI will be used to further exploit them. After that they'll be chill with each other.

 No.23678

LLMs suck. They're a huge regression of search technology.
I want rapid access to original texts, not fabricated texts that are just good enough to fool a Turing-incompetent retard.
In other words the web at its academic and cultural peak. Not a simulation constructed from the scraped moldering shit of the old web.
It's also the worst tech bubble, with the worst shills since shitcoin.

>LLMs are the GOAT and they're going to BTFO the libtard artists.

The output in any field I'm competent in is actual nonsense.
>Granted, however they'll go exponential to the moon soon. AGI by next year at the rate they're improving.
No. They're limited by their datasets, which are appropriated from the internet commons, and are more likely approaching the plateau of this particular technology.
They also over-represent information that the masses ENJOY posting, such as nudes, furry porn, and snarky reddit posts.
>No they're LEARNING just like humans and will be AGI by next year. Libtard artists BTFO and unemployed kek.

 No.23679

File: 1710105008684.jpg (65.37 KB, 550x750, captcha jenny.jpg)

>AI

 No.23680

>>23673
Sure, smaller model sizes are more prone to hallucinations. But above 34 billion parameters the accuracy is significantly higher (to the point where some people could get news articles from the training set to come back out by typing the opening word for word and having the LLM complete the rest; see the sketch below).
I just think there's an awful lot of noise in the mainstream media fear-mongering about this kind of tech. It makes me wonder whether we're really tapping it for everything it could do yet. And if regulation comes, perhaps we'll never truly know how it could have supported our efforts. I would have expected the Professor Cockshott anons to show up here (granted, it's been a while since I've lurked, so I dunno if folks still approve).
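
If you want to poke at that memorization claim yourself, here's the rough shape of the probe (a sketch with llama-cpp-python; the model path and the test passage are placeholders):

    # Completion-based memorization probe: feed the opening of a passage that may
    # be in the training data and compare the model's continuation to the real one.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/some-34b-model.Q4_K_M.gguf", n_ctx=2048)

    opening = "It was the best of times, it was the worst of times,"
    true_continuation = "it was the age of wisdom, it was the age of foolishness"

    out = llm(opening, max_tokens=32, temperature=0.0)  # greedy decoding
    completion = out["choices"][0]["text"].strip()

    print(completion)
    print("verbatim-ish match:", completion.startswith(true_continuation[:24]))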

 No.23682

>>23680
My issue is that there's no difference between "hallucinations" and its normal functioning, so you can never trust its output. Worse: you can't even tell why it answered in a particular way; it depends on dice rolls.
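
The "dice rolls" are literal, by the way. A sketch (same llama-cpp-python assumptions as above, placeholder model path) showing the same prompt producing different answers depending only on the sampler seed:

    # Same prompt, different RNG seeds: the variation is pure dice.
    # (Reloading the model per seed is wasteful; kept simple for illustration.)
    from llama_cpp import Llama

    prompt = "The main cause of the 1929 crash was"
    for seed in (1, 2, 3):
        llm = Llama(model_path="./models/some-7b-model.Q4_K_M.gguf",
                    seed=seed, verbose=False)
        out = llm(prompt, max_tokens=24, temperature=0.9)  # high temp = more dice
        print(seed, "->", out["choices"][0]["text"].strip())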

 No.23683

>>23682
Also, before someone takes this out of context: this is still about putting "infinite knowledge at the fingertips" of everyone; it does not apply to all use-cases of LLMs.

 No.23685

I don't know where the "hallucinations" discourse started, but it's a category mistake, and it gets used by tech-adjacent people who should know better.

"Hallucinations" are like jpeg artifacts, in that they don't exist except perceptually. The jpeg is always actually made up of blocks and DCT weights, but if those ever become apparent/perceivable in the output, then as a shorthand we just say "this image has artifacts". Nobody is under the illusion that if we repair the artifacts we’re recovering original data. We make up stuff and we know it.

With LLMs, all intuition goes right out the window.
>LLM lays a smooth classic swirl poop
<Oh wow it's magic much ELIZA effect
>LLM blasts out lumpy diarrhoea
<Oh no a hallucination! These will surely be ironed out in ChatGPT 6.1

 No.23687

I like them a lot, and am excited to see how they further drop the rate of profit.

>I know that closed models like GPT-4 are fucking stupid and late stage capitalism.

They have their uses; it's just that they demonstrate the limitations of capitalism.
Capitalism can't handle this technology, despite it being a massive opportunity for development in the name of competition.
What's really frustrating is that they're pushed not as alternatives but as full replacements, even in the current situation.


>>23658
I'll say it: good post.
This problem really is at the root of AI, but it's just the same limitation as relying on any living creature: you have to trust it based on a probability built up from evidence.

Sure, the AI won't ever be 100% safe at driving cars, but it can still be safer than humans (ignoring, of course, the grander point that driving is at its limit and should be abandoned as the mass solution for transportation).

 No.23688

>>23673
>>23682
>there's no difference between "hallucinations" and its normal functioning
There is: it's called the right/expected result :)
I don't really see the issue; regardless, the information put forward by these machines was always going to need cross-checking, since their development is inherently done with a bias.
(God, I wish we had a mandatory Philosophy 101 course in American high school, or even middle school; the number of people who think information can be put forward without any bias or politics is frustrating, but it also explains so much of their thinking, including mine years ago :) )


>>23685
><Oh no a hallucination! These will surely be ironed out in ChatGPT 6.1
I don't understand the smugness of stuff like this.
Text-to-video was seen as a never-gonna-happen thing years ago, and it happened; the videos, like Will Smith eating spaghetti, looking crazy, were "called out" as something that would never improve, yet they improved. Do people just not understand the concept of technology improving, or do they think they have a STEM-style proof of why it objectively can't develop?
I get that STEM people get overly excited and hype stuff, but then there's just this silliness.

Reminds me of when I was listening to the host of TrueAnon on Chapo and The Deprogram, and he said that "cryptocurrency has never served a purpose", which was objectively untrue given its role in deep-web markets, and how today it can be used to send money to others anonymously, like to the owner of libgen.
It's also a weird example since crypto can clearly serve a purpose against potential government tyranny (say, if the government froze your account because you joined a communist org), and that's just ignored.
The people who act like this are probably the same people who were massively into tech but really hated Linux for silly reasons.


I separated the posts so it was easier to read (:

 No.23690

File: 1710188672321.jpg (134.07 KB, 1281x800, smith.jpg)

>>23688
"Bias" is idealist nonsense. This is just garbage in, garbage out.

STEMlords (as opposed to STEMtards who post on twitter about imposter syndrome) used to joke about how useless lossy text compression would be. Who'd want text that is sort of like the original text, but fucked up in all sorts of subtle ways?

Yet here we are. And models that dump out bullshit which passes so long as the reader is illiterate are the basis of a billion-dollar tech bubble.

This isn't really about information theory imo. Many bullshit jobs involve writing reports which will never be read, and LLMs fill this gap perfectly. As would lorem ipsum.

 No.23692

>>23688
>Chapo
The recent guest who was railing against AI didn't have a clue what he was talking about. Your point about crypto follows the same line of thinking for me with LLMs. Recently, models on Hugging Face with "sensitive" content started getting gated, requiring emails and user information to be shared in order to download them. You would think there'd be a larger outcry, or at least an examination of why this is happening. But I think by the time folks like >>23690 realize how we could have leveraged these tools, it'll be too late.

 No.23693

>>23688
>Text to video was seen as a never thing years ago, it happened; the videos, like Will eating spaghetti, liking crazy, was said to be improved, "called out" as a never thing, yet it happened – like do people just not understand the concept of technology improving, or do they think they have a stem like proof of why it objectivly can't develop?
The thing is that ChatGPT is not that much more impressive than Eliza was, both being smoke and mirrors to give the illusion of intelligence. Ask ChatGPT to make a Wizardry clone in assembly and it will shit the bed, something modern programmers in the retro scene do just as a learning exercise. ChatGPT can't, because it has no concept of hardware, or of what the game Wizardry was, or that it has to provide instructions for a CPU to step through to create a game.

 No.23694

>>23693
You're out of the loop, anon. Local Mixtral is writing full HTML/Gradio front ends for apps now. CodeLlama's been used to design an entire site. Millions of engineers around the world already rely on services like GPT-4 and Copilot to check their work. But the local scene is where we should focus, as open source is in line with leftist objectives, and capitalists will inevitably fight to keep everything closed, as they always do.
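
For a sense of scale, the kind of "front end" being described really is tiny; a sketch of a Gradio wrapper around a local model (the Gradio calls are real, the generate() body is a placeholder you'd wire to your model):

    # Minimal Gradio front end (pip install gradio).
    import gradio as gr

    def generate(prompt: str) -> str:
        # placeholder: swap in a real call, e.g. llama-cpp-python's llm(prompt)
        return "model output for: " + prompt

    demo = gr.Interface(fn=generate, inputs="text", outputs="text",
                        title="local model demo")
    demo.launch()  # serves a local web UI, by default on http://127.0.0.1:7860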

 No.23695

>>23694
HTML is not assembly, which is why I used assembly as my example. Assembly is hardware-dependent, so ChatGPT has to be aware of the target platform; the NES is different from the C64 even though they both use the same CPU core. Assembly is also far less forgiving: it doesn't take much for the hardware to start running unexpected code (basically executing junk data because you are not where you're supposed to be in the address space). All this while ChatGPT still struggles with the concept of BASIC having different dialects with different commands and syntax.

 No.23696

>>23695
the NES is also extremely temperamental, and if you fuck up the timing of the VBLANK ISR then your screen will scroll and you'll get all kinds of other nasty display artifacts

 No.23697

>>23640
It's just another tool. I wouldn't worry too much about it nor place too much importance on it.

Reading Marx really gives you a lot of clarity in these situations.

 No.23698

>>23697
What is the Marxist perspective on automation? QRD?

 No.23699

>late stage capitalism
not a thing

 No.23700

File: 1710201882913.jpg (78.31 KB, 600x415, stal-cccp.jpg)

>>23699
I see late-stage capitalism like late-stage feudalism, where the system struggles more and more to reproduce itself. I mean, can you imagine modern US capitalists drastically ramping up their industrial capacity in this late phase, like they did from the latter half of the 19th century up to the Great Depression, and like the USSR did early on?

 No.23701

>>23700
there are no stages to capitalism, and thinking capitalism has a "late stage" just because of crises, when crises were already pointed out as intrinsic to capitalism two centuries ago, lol

 No.23702

>>23701
Then why have US capitalists stopped being the world's manufacturer, if not because US capitalism has entered its senile stage?

 No.23704


 No.23708

>>23640
i think AI is dangerous, and will eventually cause the death of human meaning by automating away all creative thought. furthermore, almost no AIs publish their training datasets, and no licenses exist which require such a thing, making all of them inscrutable black boxes equivalent to proprietary software, and also security hazards, as it's been demonstrated that AIs can be backdoored during training.

 No.23709

>>23708
adding to this, AI is highly computationally expensive, and state-of-the-art models generally scale in capability with the amount of computing power you throw at them, making it impossible for proletarians to exploit the technology to the extent that the bourgeoisie can by building huge datacenters. it's basically a technology that inherently favors wealthy people (unlike, say, guns, which are a social leveller)

 No.23710


 No.23721

>>23695
>Assembly is hardware dependent so ChatGPT has to be aware of the target platform, NES is different the C64 even though they both use the same CPU.

Exactly. And so in other words if someone opens ChatGPT and just says

>write me a Wizardry clone in assembly


Yeah, it’s not gonna work very well.

You probably have to do it much more step by step, and perhaps start the conversation with a general discussion of the platform you're targeting.

So for example, the first thing you should say to ChatGPT would probably be something like

>you are an expert NES developer. I come to you for advice on making a basic dungeon crawler in assembly for Nintendo Entertainment System. Let’s start by getting the environment set up and making a program for the NES that will output a single green rectangle to the screen


And take it from there. Don’t immediately jump to demanding it produce something complete for you. Have a conversation with it like you would with a fellow developer.
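
In API terms that's just a system message plus an incremental conversation; a sketch with the openai Python client (assumes OPENAI_API_KEY is set; model name and prompts are illustrative):

    # Step-by-step prompting as a conversation (pip install openai).
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "system", "content": "You are an expert NES developer advising "
                                      "on a 6502 assembly dungeon crawler."},
        {"role": "user", "content": "First step only: set up the toolchain and "
                                    "draw a single green rectangle on screen."},
    ]

    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = resp.choices[0].message.content
    print(reply)

    # Feed the history back and advance one small step at a time:
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Now add controller input to move it."})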

 No.23723

>>23721
You are giving ChatGPT too much credit. It really has no concept of what an NES is, so it has no way to filter the assembly data it has for one platform from another. Even then, it has no concept of the NES memory address space, so it can't juggle memory: the NES has no memory management, all garbage collection has to be done manually by the code, and the NES only has a tiny 2K of RAM, so as a programmer you have to git gud and juggle data between registers, RAM, and ROM. That's why NES and Game Boy games have so many bugs where the value read from an address is unrelated to the operation: if you get out of order in the address space, that address represents another variable, out of necessity. There is no way for a stupid computer to understand the dark arts of real coding.

 No.23724

>>23723
>There is no way for stupid computer to understand the dark arts of real coding.

Cease your pestering, insect. Accept the coming of your new lord.

 No.23725

>>23708
tbh I would accept this if I didn't have to work for a living (I know this isn't actually gonna happen I'm just saying)

 No.23731

Just as SHODAN once said:
>Remember, it is my will that guided you here.
http://web.archive.org/web/20201110190620if_/https://www.seas.harvard.edu/news/2020/05/predictive-text-systems-change-what-we-write
This article shows that even simple predictive-text systems manipulate our thinking. And now AI assistants are coming to tell you what to write, what to think, and what to do.

 No.23732

>>23731
>people get lazy with autocomplete
wow!!!!!!!

 No.23733

>>23732
Let me guess: you're an autocomplete user.

 No.23734

>>23733
my phone is too old for that, dipshit

 No.23735

>>23734
Good. Then help spread awareness of the issue.

 No.23736

>>23731
This article is even better:
>Writing with AI help can shift your opinions
>Artificial intelligence-powered writing assistants that autocomplete sentences or offer “smart replies” not only put words into people’s mouths, they also put ideas into their heads, according to new research.

http://web.archive.org/web/20240205045643if_/https://news.cornell.edu/stories/2023/05/writing-ai-help-can-shift-your-opinions

 No.23765

>>23721
>>23723
A language model could "write" a Wizardry clone for the NES, but you would have to have previously fed it a lot of Wizardry clones or very similar programs. So the usefulness of LMs still needs to be questioned, as the only thing they can do is copy and merge; people get excited when they see one solve a problem, but ignore that the answer was produced by looking at the results of people who had already solved it.

 No.23766

>>23765
LLMs fail badly the more novel your problem is, unless you handhold them. A big issue is that information in the context doesn't seem to weigh as strongly as information in the training set, when it should weigh more. I tried to get ChatGPT to work with nonstandard VGA modes and it couldn't handle it; it would always fall back to 640x480@60Hz timings. The code was buggy as hell too, but I could mostly fix it with English prompting. I don't think any developer should be worried about LLMs taking on tasks beyond those you could give to an average junior anytime soon. It can do a CRUD app, but deep domain knowledge is safe. Rule of thumb: if you could do it by copy-pasting from Stack Overflow, ChatGPT can do it.

 No.23779

>>23640
>What is the party line on AI and the proletariat?
AI is the new slave class and proletarians are the new plebeians.

 No.23793

>>23640
>the party line
Draw your own conclusions from your own analysis, anon. Marxism is (at least supposed to be) scientific, not just another religion. Otherwise it's no better than being an ancap or a radlib or whatever other flavor-of-the-month ideology.


Unique IPs: 28
