(09-25-2023, 08:38 AM)Eric Cartman wrote: So of those, an AI could theoretically eliminate bureaucratic inefficiencies, because it would be capable of - for all intents and purposes - real-time data collection and wouldn't need to wait for someone to file a report, because it would be continuously filing its own reports to itself. It also obviously wouldn't put it off because it's bad news, couldn't be fucked because it's a long report and it's Friday afternoon, or all the other ways information in a system gets delayed.

It wouldn't be, and this is the central problem with central planning: that data essentially can't be acquired or used. Human wants are relative and ranked, and don't use consistent values; aggregating them across millions or billions of people makes the data even more useless. Even updated instantly, the data is always old and can never tell you about the future. There's no place for risk/reward, because everything is already accounted for and earmarked for consumption; that's the entire point. And this is assuming the planner can still make decisions about necessary resources when there are no values, so "efficiency" and "waste" can't even be determined: the plan may call for goods to be produced when none of the materials needed to produce them are available, either because they don't exist or because they've already been used up for something else. (Not to mention the labor-distribution problem when some producers have nothing to produce.) Maybe a better, less wasteful use for those materials hasn't happened yet, and never will, because the plan doesn't account for it. That "access to all available research" assumes the research was planned for in the first place: how is the central plan to value potential research into something that doesn't yet exist? And how is it supposed to value that against the pre-existing demands on resources in a plan that has no excess resources to spare?
If you write the plan to deliberately create excess that an elite, whether AI or class, can use to speculate, then you've basically just reinvented feudalism without the nobles' duties. And the constant trial and error, even sped up by instantaneous data, would still require constantly reworking the entire plan, which affects every component of it everywhere.
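To make the aggregation point concrete, here's a toy sketch of my own (not from the thread, and the goods are made up): when preferences are ordinal rankings rather than consistent cardinal values, even perfect data can fail to aggregate into any coherent social ranking - pairwise majority preference can simply cycle (the classic Condorcet paradox).

```python
# Three people rank three goods ordinally (most- to least-preferred).
# Pairwise majority "aggregation" of these rankings produces a cycle,
# so no single consistent aggregate ranking exists.
from itertools import combinations

rankings = [
    ["bread", "fuel", "steel"],
    ["fuel", "steel", "bread"],
    ["steel", "bread", "fuel"],
]

def majority_prefers(a, b):
    """True if a strict majority ranks good a above good b."""
    votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    return votes > len(rankings) / 2

for a, b in combinations(["bread", "fuel", "steel"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"majority prefers {winner} over {loser}")
# The printed results form a cycle: bread beats fuel, fuel beats steel,
# yet steel beats bread - there is no "most wanted" good to plan around.
```

A planner sitting on this data, however fresh, has nothing stable to optimize against, which is the point being made above.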
(09-25-2023, 08:38 AM)Eric Cartman wrote: It also wouldn't be a suck-up idiot that doesn't really understand the job it's doing - it would pretty much be a pre-eminent expert on whatever it's doing, because it would have access to all available research at any given state of peer review or publication the moment anything was entered into its database. It could not only be better than any human in terms of available knowledge, it would be more capable than any human physically could be (this is the whole concept of the singularity, where AI can basically solve everything because it's effectively omniscient).
I mean, yes, of course, if we assume the AI could hypothetically keep track of everyone's values and was fast enough, it could potentially manage this, but 1.) it would depart from central planning into resource distribution (which is what the Soviet Union and China both essentially turned to, rewriting the Five Year Plan after the fact based on what had actually been done), 2.) it would still have the problem of the future, and 3.) we already have a much more energy-efficient solution that doesn't require V-Ger (and might spawn The Borg).