China wants to dominate in AI — and some of its models are already beating their U.S. rivals (www.cnbc.com)
from schizoidman@lemm.ee to technology@lemmy.ml on 17 Dec 00:18
https://lemm.ee/post/49967660

cross-posted from: lemm.ee/post/49967612

#technology


formergijoe@lemmy.world on 17 Dec 00:48 next collapse

“On Hugging Face, a repository of LLMs, Chinese LLMs are the most downloaded, according to Tiezhen Wang, a machine learning engineer at the company. Qwen, a family of AI models created by Chinese e-commerce giant Alibaba, is the most popular on Hugging Face, he said.”

Which LLM was this written by?

RizzRustbolt@lemmy.world on 17 Dec 06:46 collapse

Doesn’t matter, they’re all just 3 ELIZAs in a trenchcoat.

avidamoeba@lemmy.ca on 17 Dec 01:54 next collapse

And they’re likely to do so given how much of it depends on how much capital and energy you dump into it.

deadcream@sopuli.xyz on 17 Dec 04:43 next collapse

Isn’t AI a capitalist bubble profiting from stealing other people’s work? Why would a communist country engage in that?

dRLY@lemmy.ml on 17 Dec 05:58 next collapse

What we are constantly bombarded with in the capitalist hype train bubble is fucked. Just watching so many corps tripping over themselves to force this shit into things that really and truly don’t need it (or into shit like health insurance to auto-deny life-saving care) is maddening. It is being used without concern for whether it’s even past the alpha stage. Just like how we see the games industry launching games broken as hell, as if they were complete and for full price. The stealing of work is very much in line with capitalism, just as it always has been with regard to physical resources.

But it can have benefits beyond the above-mentioned bubble cashing in and out. It could be very useful in socialist/communist nations, because under those systems the ownership would be collective and the benefits shared right back with everyone. But it should be carefully applied, and not put profits over lives. So really, such advancements would make more sense in socialist/communist settings.

Also, places like China actually do apply the death penalty to greedy leaders of companies. Which helps, compared to the US and other capitalist nations that place those same greedy leaders on pedestals beyond any real punishment.

falkerie71@sh.itjust.works on 17 Dec 06:12 collapse

death penalty to greedy leaders of companies

With the notable exception of people in power. (Edit: They do get executed.) Corruption is real bad there; there are plenty of examples of people in government positions being filthy rich and having kids who show off their extravagant spending on Douyin.

vritrahan@lemmy.zip on 18 Dec 16:03 collapse

Government officials have also been executed for corruption, especially since Xi Jinping became general secretary.

falkerie71@sh.itjust.works on 19 Dec 01:49 collapse

I stand corrected.

falkerie71@sh.itjust.works on 17 Dec 05:59 next collapse

Why not? It doesn’t really matter who steals from whom in the AI race, it’s about who gets to the top first to dominate the market. So of course China would want that, as does every AI company around the world.

FYI, China, although communist in name, actually functions mostly on a capitalist economy, but party authority always comes first.

queermunist@lemmy.ml on 17 Dec 06:29 collapse

FYI, China, although communist in name, actually functions mostly on a capitalist economy, but party authority always comes first.

In a capitalist economy the market comes first, the market dominates politics. In China it is the exact opposite, and that’s why China can build entire cities in the time it takes a capitalist country to build a single bridge or highway. They don’t wait for private investors to provide funding or wait for private contractors to bid for the project. They just decide they’re building a city and then they do it.

In other words, politics are in command.

falkerie71@sh.itjust.works on 17 Dec 15:00 next collapse

From my understanding, they do have contractors bid for projects. Though whether it’s fair and transparent is another issue.

yogthos@lemmy.ml on 17 Dec 15:49 collapse

Pretty much all of the core economy is state owned, and the role of the private sector continues to diminish: piie.com/…/chinas-private-sector-has-lost-ground-…

humanspiral@lemmy.ca on 17 Dec 18:27 collapse

Capitalism is a useless word. Markets and property ownership (communal property is property) exist naturally. Oligarchism and corporatism (supremacy of those classes) are narrowly defined words. US philosophy is industrial policy to maximize oligarchist/corporatist supremacy and profits. Chinese industrial policy maximizes abundance, and overall policy restricts oligarchist excess. It directs massive job-creating investment, but doesn’t interfere with market wages, as labour-supremacist opposition parties in the West advocate for.

davel@lemmy.ml on 18 Dec 16:42 next collapse

What no theory does to a mf.

queermunist@lemmy.ml on 18 Dec 17:42 collapse

🤡

Delphia@lemmy.world on 18 Dec 08:41 collapse

Because AI can screen far more private personal conversations between potential dissidents than real people can?

utopiah@lemmy.ml on 17 Dec 06:46 next collapse

What an impressive waste of resources. It’s portrayed as THE most important race and yet what has been delivered so far?

Slightly better TTS or OCR, photo manipulation that is commercially unusable because sources can’t be traced, summarization that can introduce hallucinations, … sure, all of that is interesting in terms of academic research, with potentially some use cases… but it’s not as if it didn’t exist before at nearly the same quality for a fraction of the resources.

It’s a competition where “winners” actually don’t win much, quite a ridiculous situation to be in.

iopq@lemmy.world on 17 Dec 09:19 next collapse

Image gen did not exist in any way shape or form before. Now we’re getting video gen like a few years later.

Let’s not forget we started by playing the game of Go better. My prediction as a hobby Go programmer (the game, not the language) in 2015 was that better-than-human AIs would get there by 2020, and they got there in 2016.

Before the AlphaGo match with Lee Sedol people predicted the AI would just put up a decent fight since a previous version played questionably against a weaker player. It blew one of the best players ever out of the water, losing only one game of the series.

Later matches, even against the world #1 and with better models, showed it to be invincible against humans.

You’re making the same mistake. You’re looking at the current capabilities and predicting a human speed of improvement. AI is improving faster.

utopiah@lemmy.ml on 18 Dec 08:11 collapse

Image gen did not exist in any way shape or form before.

Typical trope while promoting a “new” technology. A classic example is 1972’s AARON en.wikipedia.org/wiki/AARON which, despite being based neither on an LLM (so not CLIP) nor even on ML, is still creating novel images. So… image generation has existed since at least the 70s, more than half a century ago. I’m not saying it’s equivalent to the implementations since DALLE (it is not), but to simply ignore the history of a research field is not doing it justice. I have also been modding old.reddit.com/r/computationalcrea/ for 9 years, so since before OpenAI was even founded, just to give some historical context. Also, 2015 means 6 years before CLIP. Again, not to say this is equivalent, solely that generative AI has a long history, and thus pinning start dates to grand moments like AlphaGo or DeepBlue (and on this topic I can recommend Rematch from Arte)… is very much arbitrary and in no way helps to predict what’s yet to come, both in terms of what’s achievable and the pace.

Anyway, I don’t know what you actually tried, but here is a short list of the 58 (as of today) models I tried …benetou.fr/…/SelfHostingArtificialIntelligence and that’s excluding the popular ones, e.g. ChatGPT, Mistral Le Chat, DALLE, etc., which I also tried.

I might be making “the same mistake” but, as I hope you can see, I do keep on trying what I believe is the state of the art on a pretty much weekly basis.

iopq@lemmy.world on 18 Dec 12:59 collapse

Creating abstract art by moving pixels around is not anywhere close to what we mean by image generation. At no point did this other software generate something from a prompt.

utopiah@lemmy.ml on 18 Dec 13:43 next collapse

in any way shape or form

I’d normally accept the challenge if you hadn’t added that. You did though, and a system (arguably intelligent) made an image, several images in fact. Whether we like or dislike its aesthetics, or that the way it was done (without a prompt) is different from how it’s done now, remains irrelevant according to your own criteria, which set no limits. Anyway, my point with AARON isn’t about this piece of work specifically, rather that there is prior work, and this one is JUST an example. Consequently the starting point is wrong.

Anyway… even if you do question this, I argued for more, showing that I did try numerous (more than 50) models, including very current ones. It even makes me curious whether you, who are arguing for these capabilities and their progress, have tried more models than I did, and if so, where I can read about it and what you learned from those attempts.

iopq@lemmy.world on 18 Dec 15:49 collapse

It’s irrelevant because it wasn’t a precursor technique. The precursor was machine learning research, not other image generation technology

jacksilver@lemmy.world on 18 Dec 19:29 collapse

So LLMs can trace their origin back to the 2017 paper “Attention Is All You Need”; together with diffusion models, they have enabled prompt-based image generation at an impressive quality.

However, looking at just image generation, you have GANs as far back as 2014, with StyleGANs (ones that you could more easily influence) dating back to 2018. While diffusion models also date back to 2015, I don’t see any mention of their use for images until the early 2020s.

That’s also ignoring that all of these technologies go back further to LSTMs and CNNs, which go back further into other NLP/CV technologies. So there has been a lot of progress here, but progress also isn’t always linear.

iopq@lemmy.world on 19 Dec 05:36 collapse

You can see that with image generation, progress was extremely quick:

<img alt="" src="https://lemmy.world/pictrs/image/8d7df9ab-4e5f-4217-930f-49d2bedc6fb8.jpeg">

yogthos@lemmy.ml on 17 Dec 19:02 next collapse

This sort of stuff has been said about pretty much every technological breakthrough in history. Language models on their own do indeed have lots of limitations, however there is a lot of potential in coupling them with other types of expert systems. We simply don’t know what all the potential applications are for this tech. However, the iron rule throughout history has been that people dismissing new technological developments have typically been proven wrong.

utopiah@lemmy.ml on 18 Dec 08:14 collapse

Language models on their own do indeed have lots of limitations, however there is a lot of potential in coupling them with other types of expert systems.

Absolutely, I even have a dedicated section “Trying to insure combinatoriality/compositionality” in my notes on the topic …benetou.fr/…/SelfHostingArtificialIntelligence

Still, while keeping this in mind, we also must remain mindful of what each system can actually do, and not conflate that with what we WANT it to do but it cannot do yet, and might never be able to.

yogthos@lemmy.ml on 18 Dec 14:52 collapse

Sure, we have to be realistic about the capabilities of different systems. Thing is, we don’t know what the actual limitations are yet. In the past few years we’ve seen huge progress in terms of making language models more efficient, and more capable.

My expectation is that language models, and the whole GPT algorithm, will end up being a building block in more sophisticated systems. We’re already seeing research shift from simply making models bigger to having models do reasoning about the output. I suspect that we’ll start seeing people rediscovering a lot of symbolic logic research that was done back in the 80s.

The overall point here is that we don’t know what the limits of this tech are, and the only way to find out is to continue researching it, and trying new things. So, it’s clearly not a waste of resources to pursue this. What makes this the most important race isn’t what it’s delivered so far, but what it has potential to deliver.

If we can make AI systems that are capable of doing reasoning tasks in a sufficiently useful fashion that would be a game changer because it would allow automating tasks that fundamentally could not be automated before. It’s also worth noting that reasoning isn’t a binary thing where it’s either correct or wrong. Humans are notorious for making logical errors, and most can’t do formal logic to save their lives. Yet, most humans can reason about tasks they need to complete in their daily lives sufficiently well to function. We should be applying the same standard to AI systems. The system just needs to be able to function well enough to accomplish tasks within the domain it’s being used in.
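
To make the coupling idea concrete, here’s a minimal sketch in Python of a language model generating candidates and a symbolic component accepting or rejecting them. The `llm_propose` stub and the toy arithmetic checker are purely hypothetical stand-ins for a real model call and a real expert system or verifier.

```python
import re

def llm_propose(question: str) -> str:
    # Hypothetical stand-in for a language model call; a real system would
    # query an LLM API here and get back free-form text.
    return "The total is 42 because 40 + 2 = 42."

def symbolic_check(answer: str) -> bool:
    # Toy "expert system": extract an arithmetic claim of the form a + b = c
    # and verify it exactly, instead of trusting the model's prose.
    m = re.search(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)", answer)
    return bool(m) and int(m.group(1)) + int(m.group(2)) == int(m.group(3))

def answer_with_verification(question: str, retries: int = 3) -> str | None:
    # The language model is just a building block: it proposes, the symbolic
    # component disposes. Only verified answers are returned.
    for _ in range(retries):
        candidate = llm_propose(question)
        if symbolic_check(candidate):
            return candidate
    return None

print(answer_with_verification("What is 40 + 2?"))
```

The same shape scales up when the checker is a theorem prover, a type checker, or a domain-specific rule engine instead of a regex.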

geneva_convenience@lemmy.ml on 18 Dec 17:20 collapse

LLMs are great at generating boilerplate code or sending you in the right direction.

Coreidan@lemmy.world on 17 Dec 14:14 next collapse

AI in its current form is a joke.

A nothing story.

humanspiral@lemmy.ca on 17 Dec 17:51 next collapse

A bigger deal is that they are open source.

China does have a “delete America” program, and I saw an announcement, not repeated in the West, that they banned Intel, AMD, and Nvidia for certain Chinese customers about 2 weeks ago. Nvidia still has a big interconnect, CUDA, and driver stability advantage in GPUs, but China is catching up on price/performance, and national security makes this an area of high investment. A Chinese workaround for the Nvidia restrictions is to just host AI training in Thailand. Hosted costs for AI in China are 1/7th the cost of the cheapest US option (Llama).

Extreme US climate terrorist energy corruption is going to limit US datacenters and affordable electricity for all. Mexico has an opportunity for electricity exports and datacenter hosting with access to Chinese energy.

tetris11@lemmy.ml on 18 Dec 15:09 collapse

Extreme US climate terrorist energy corruption

expand on this please

humanspiral@lemmy.ca on 18 Dec 16:14 collapse

The incoming administration promises to protect oil and gas oligarchs by limiting renewables/batteries/EVs, including potentially destroying the big 3 automakers with Canada/Mexico tariffs and harming Tesla competitors with losses on their NA investments in batteries and EVs; that includes tariffs on JPN/ROK batteries. Coercion of EU colonies is likely to force their dependence on extortionist US oil and LNG imports if they want to keep the US in NATO, for the privilege of buying extortionist US weapons. The NATO CIA-stooge leader is telling the EU that it needs to give up pensions, social security, and healthcare, and institute drafts, to get to 3%-of-GDP military spending targets. The EU blaming immigrants from US-sponsored destroyed lands, instead of its own US subjugation, is certain to be the political climate for abandoning sustainability and prosperity.

Coercing US consumers into NG electricity expansions, together with increased LNG exports and datacenter expansions, is an attempt at boosting NG prices, even when $2/mmbtu is not competitive with solar for new electricity. Utility oligarchs are certain to feed better at the trough if they support the NG extortion agenda instead of renewables.

The main point is that in addition to cheaper energy, outside of US extortionist energy pricing, foreign datacenters also allow for global customers.

Mandy@sh.itjust.works on 17 Dec 18:16 next collapse

<img alt="" src="https://sh.itjust.works/pictrs/image/a4374893-b92c-4ba6-91b1-3f3cdd15b90a.png">

Mango@lemmy.world on 18 Dec 16:26 collapse

Wake me up when AI isn’t garbage.