Asus plans on-site ChatGPT-like AI server rentals for privacy and data control

This seems like it could be a tricky sell.

I don't doubt that there's a market of people who will absolutely pay a considerable premium to keep data on premises, along with demand from people who want really expensive GPUs delivered as opex rather than capex. But "will be owned and operated by Asus" means that, regardless of the hardware being in your DC, the system, and your data, are under Asus' operational control. That puts it in basically the same category as an AWS Outpost or an Azure Stack, if you must have the hardware on-site, and makes it not substantially different from Microsoft's proposed "yeah, we'll run it, but isolated, really" offering.

Is Asus Cloud more of a thing in Taiwan, and this is basically AWS Outpost/Azure Stack, but for the Asus analog? Is it not really a thing, but Asus is trying to change that? Are there specific regulatory requirements that create a pool of customers who must have the hardware where they can see it, but aren't required to actually maintain operational control of it?
 
Upvote 35 (35 / 0)
I can’t wait for bad helpdesk advice
Oh, kinda like what happened with the National Eating Disorders Association when they replaced all their advisors with a chatbot on Monday, only to have to completely scrap the whole thing today because it really shit the bed so bad?

At least the rest of us get some entertainment out of it.
 
Upvote 39 (40 / -1)

Sarty

Ars Praefectus
7,238
Subscriptor
For the sake of argument, taking the product at face value, how much usage does a $6k/mo box support?

One connected user at any one time? Twenty simultaneous "threads" (I have no idea how computationally expensive the generation process is - it might take nonzero time)? Does a single box support an office tower with a thousand people?
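For a rough sense of scale, the answer hinges almost entirely on token throughput. Here's a back-of-envelope sketch where every number is an assumption for illustration; Asus hasn't published throughput figures for these boxes:

```python
# Back-of-envelope capacity estimate for a single LLM inference box.
# Every constant here is an assumption for illustration, not an Asus spec.

AGGREGATE_TOKENS_PER_SEC = 1_500   # assumed batched throughput of an A100 server
TOKENS_PER_RESPONSE = 300          # assumed length of a typical chatbot reply
QUERIES_PER_USER_PER_HOUR = 12     # assumed: one question every five minutes

# Average token demand a single intermittent user places on the box.
tokens_per_user_per_sec = TOKENS_PER_RESPONSE * QUERIES_PER_USER_PER_HOUR / 3600

# How many such users the box can serve before requests start queuing.
supported_users = AGGREGATE_TOKENS_PER_SEC / tokens_per_user_per_sec
print(f"~{supported_users:,.0f} intermittent users per box")  # ~1,500 under these numbers
```

By that math a single box could plausibly cover an office tower of casual users, or choke on a handful of heavy ones; it all comes down to tokens per second versus how chatty the office is.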
 
Upvote 25 (25 / 0)
Oh, kinda like what happened with the National Eating Disorders Association when they replaced all their advisors with a chatbot on Monday, only to have to completely scrap the whole thing today because it really shit the bed so bad?
That sounds like a good topic for a Benj Edwards x Beth Mole collaboration article. If only some website somewhere would do that.
 
Upvote 11 (12 / -1)

Korios

Ars Scholae Palatinae
992
The servers will be powered by Nvidia's A100 GPUs and will be owned and operated by Asus.
I get the "owned" bit, since this is a rental service. But why on Earth would they also need to "operate" them on top of that? So what exactly will their clients do?

And what if their business model is incompatible with another company "operating" the servers they've bought or rented? Why should they trust Asus with their potential trade secrets and proprietary code?
 
Upvote 8 (8 / 0)
Oh, kinda like what happened with the National Eating Disorders Association when they replaced all their advisors with a chatbot on Monday, only to have to completely scrap the whole thing today because it really shit the bed so bad?

At least the rest of us get some entertainment out of it.
Never undersell shitting the bed. Sometimes it's nice to relieve yourself with no regard for the consequences.
 
Upvote 7 (7 / 0)
"Hi we're ASUS and we've seen what Nvidia are doing and we're feeling left out. Please! Please! Please! Buy our stuff! Here we have AI and everything! Just don't forget us!"
It seems like Ars's coverage isn't substantially different from Asus's business messaging expressed in your satire.
 
Upvote 0 (3 / -3)
This is the type of appliance needed in the secure settings I work in. On-prem model hosting services would be pretty cool to have.
Regulatory compliance, I get why some things can't be "in the cloud". But lost in all this hysteria (and it is definitely hysteria, not rational discourse) around ChatGPT, Bing's AI, Bard, etc. is an actual use case for businesses where these products are a net positive rather than a net liability.

You have these models spewing out complete fabrications in response to 40%-80% or more of the questions asked, depending on the methodology and LLM being used. You've got massive security and privacy implications from faked correspondence and inadvertent disclosure of confidential information (moving the computation on-prem doesn't change that problem, it only changes who sees it - not that anyone should be seeing it to begin with). Most people are very credulous and will believe just about anything these models put out - they aren't bothering to fact-check, or worse, they're letting the LLM fact-check itself. That's like letting Donald Trump enforce his own contracts: "I didn't do it! That's a fake recording of my voice!" Fact-checking takes time. Time is money. It's easier and cheaper to do the research yourself or employ a research assistant (like paralegals - that's their job) than to ask GPT, Bard, etc. to produce a summary and then go find out it's made something up out of whole cloth.

These models are not intelligent. They aren't intuitive. They're just advanced fuzzy statistical and logic models. All they're doing is spewing back the filth the Internet taught them we as a species want to hear, statistically speaking. It's no wonder these models are inventing stories the majority of the time. Any data generated by humans is going to cause this kind of going off the rails. The only way to fix it is to allow future systems, not yet created, to gather raw data not already "interpreted" by humans.
 
Upvote 4 (6 / -2)

dEvErGEN

Smack-Fu Master, in training
69
an actual use case for businesses where these products are a net positive.

Marketing and customer service, perhaps… AI isn’t ashamed to lie, and has no real cost, so of course that’s what they use to attempt to make the end-user experience better… Need to smooth over a pile of flaming shit and say all is well? AI has you covered.
 
Upvote 1 (3 / -2)
Honestly, I wonder if it will be better than the prompt-scripted support I get now. In my last dealings with Comcast, I said "I want to cancel my account and pay my final bill" and was sent to a "specialist" who dealt in canceling accounts, which almost ended with "your services have been restored." When I said it had better not have, because I was calling to cancel everything and pay my final bill, they then said "Your account has been canceled and your final bill paid." Of course, the next month I got a new bill and had to talk to more people for about a month before my account was actually canceled and my final bill paid.
 
Upvote 6 (6 / 0)
Honestly, I wonder if it will be better than the prompt-scripted support I get now. In my last dealings with Comcast, I said "I want to cancel my account and pay my final bill" and was sent to a "specialist" who dealt in canceling accounts, which almost ended with "your services have been restored." When I said it had better not have, because I was calling to cancel everything and pay my final bill, they then said "Your account has been canceled and your final bill paid." Of course, the next month I got a new bill and had to talk to more people for about a month before my account was actually canceled and my final bill paid.
The way I see it, it will manage to piss off twice as many people as previously possible. It will piss off the "Let me speak to a real human!" folks who hate automated messages, while simultaneously pissing off people who want quality support, because the AI will still only be on the level of "have you tried rebooting?" for a while. And, hell, it will probably piss off the call center reps who lose their low-paying jobs because a machine will do it without complaining about quotas or needing sick time.
 
Upvote 4 (4 / 0)
Marketing and customer service, perhaps… AI isn’t ashamed to lie, and has no real cost, so of course that’s what they use to attempt to make the end-user experience better… Need to smooth over a pile of flaming shit and say all is well? AI has you covered.
False advertising is a crime in most countries, regardless of how often it's enforced.
 
Upvote 2 (2 / 0)
Rent is not own, even if it is inside your walls. High-level employees accumulate knowledge of trade secrets, approaches, and designs in the process of working for a business, and they sign NDAs. If all this thing learns is the patterns your company judges to be good, it is accumulating knowledge of trade secrets in the process of helping the business as a tool for those high-level employees. It needs to be owned and locked up, just like servers containing trade secrets and filing cabinets containing trade secrets.

If all it is doing is replacing the lowest-level functions, like spamming/phishing the world or replacing a contracted-out call center or help desk, then the cloud full of corporate reservoirs of customer personal data is its natural habitat.

I do not see the upside of renting something that learns.
 
Upvote 5 (6 / -1)
The way I see it, it will manage to piss off twice as many people as previously possible. It will piss off the "Let me speak to a real human!" folks who hate automated messages, while simultaneously pissing off people who want quality support, because the AI will still only be on the level of "have you tried rebooting?" for a while. And, hell, it will probably piss off the call center reps who lose their low-paying jobs because a machine will do it without complaining about quotas or needing sick time.
I think companies will quickly figure out that corporations calling for support aren't going to tolerate that any more than they tolerated call centers moving to Asia. OEMs with business and government customers quickly backtracked on that. In some cases consumer support centers were moved back to native-language support centers as well (from India or the Philippines to the US, or back to Europe, etc.). I doubt many people are going to tolerate chatbots spewing out invented bogus solutions. I'd say they won't tolerate scripted responses either, except I know better, because that's what T1 consumer-level support always is. That's usually why customers get irate, too - once they actually have to deal with support - but most people don't think about that until after they already have a problem, and by then it's too late.

Moral of the story: buy corporate/business-level products and support contracts if you want decent product support. That has been true for the past 30 years. Delusional fuzzy logic models won't change that.
 
Upvote 4 (4 / 0)
Eh... not from Asus; I don't think they have the trust right now.
I don't think they're wrong on the business case, but this feels like a half-step toward the real value, which requires real integration.
MSFT actively pitches access to internal context as critical for their offerings: the more you have available for the AI to reference, the more useful the output will be. Coding a solution with knowledge of your company's codebase is a simple start - and you would want it to have context on the current product and code.
That may come out to well over the $10k/mo price, almost certainly for any real-size company, but if you want to make an AI investment, this feels like dipping toes into the water to address concerns about queries being public... not optimizing to increase worker efficacy.
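To make "reference internal context" concrete, here is a minimal sketch of the retrieval-augmented pattern that kind of integration usually means; the interfaces (doc_index, llm) are hypothetical placeholders, not any vendor's actual API:

```python
# Minimal sketch of retrieval-augmented prompting over internal documents.
# `doc_index` and `llm` are hypothetical placeholder interfaces, not a real API.

def answer_with_internal_context(question: str, doc_index, llm, top_k: int = 3) -> str:
    # Pull the most relevant internal documents from an on-prem index.
    relevant = doc_index.search(question, top_k=top_k)
    context = "\n---\n".join(doc.text for doc in relevant)

    # Prepend them to the prompt so the model answers from company data,
    # and none of it ever leaves the building.
    prompt = (
        "Answer using only the internal context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.generate(prompt)
```

The on-prem pitch is that both the index and the model in a loop like this stay inside your walls; the integration work is getting that index populated with anything useful.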
 
Upvote 0 (0 / 0)

robrob

Ars Tribunus Angusticlavius
6,518
Subscriptor
Formosa is an interesting name choice.

I get the "owned" bit, since this is a rental service. But why on Earth would they also need to "operate" them on top of that? So what exactly will their clients do?

And what if their business model is incompatible with another company "operating" the servers they've bought or rented? Why should they trust Asus with their potential trade secrets and proprietary code?

The Bloomberg article mentions that a big part of it is data refreshes, so it will be helping to build the training models for its clients and keep them up to date.

It mentions that key targets are places like banks, healthcare, and law firms, so it's not just a box that runs a chatbot; it's providing all the software services they need to actually tweak the model. Which is a fairly valuable service, and not many places would have the in-house talent to do that.
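As a rough illustration of what a recurring "data refresh" service might look like operationally, here's a hypothetical outline; the function names and the split between re-indexing and fine-tuning are assumptions, not anything Bloomberg or Asus describes in detail:

```python
# Hypothetical outline of a periodic client data refresh. The interfaces
# (index.add, model.fine_tune) are illustrative placeholders, not Asus's stack.

def refresh(new_docs, index, model, finetune_threshold: int = 10_000):
    # Compliance gate: only material the client has cleared for training.
    cleared = [d for d in new_docs if d.approved_for_training]

    # Cheap path: fold new documents into the retrieval index immediately.
    index.add(cleared)

    # Expensive path: only kick off a fine-tuning run once enough new
    # material has accumulated to be worth the GPU time.
    if len(cleared) >= finetune_threshold:
        model.fine_tune(cleared)
```

Whether the service actually retrains weights or just refreshes an index, the selling point is the same: most banks and law firms don't have anyone in-house to run this loop.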
 
Upvote 0 (0 / 0)
Oh, kinda like what happened with the National Eating Disorders Association when they replaced all their advisors with a chatbot on Monday, only to have to completely scrap the whole thing today because it really shit the bed so bad?

At least the rest of us get some entertainment out of it.
As clarification, a quick look at the media coverage makes it clear that the National Eating Disorders Association chatbot was a simple device and not based on an LLM. I am not saying that using an LLM-based chatbot would have been a good idea, just that this isn't an instance of that happening.
 
Upvote 0 (0 / 0)
Oh, kinda like what happened with the National Eating Disorders Association when they replaced all their advisors with a chatbot on Monday, only to have to completely scrap the whole thing today because it really shit the bed so bad?
Replaced all their advisors because they voted to unionise. Can't forget that part of it.
 
Upvote 3 (3 / 0)