"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."
I always enjoy how these AI companies try to take a moral high ground. When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want? Supporting people that want more AI regulation to stop this? Literally anything else.
Just be honest: you think this is the future and you do in fact want to be first doing it, to be in a position to make a lot of money. Do you think people don't know what an ad is when they see one?
For decades we moved to a knowledge-based economy; now we have perversely wealthy people saying they're coming for those jobs. The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.
If (1) many bright and very online people are going to lose their jobs, and (2) the response has not been mass unionization, might I rethink [1] a more likely future of work or rethink [2] the psychology of the average/collective knowledge workforce, or...
"where union" in short.
Perhaps the concept is too foreign for white collars, or on average folks think they'll be OK and it's the juniors who'll go... maybe too focused on immediate needs... a belief unionization is the wrong response... (and I'm not advocating for anything in particular btw)
To be fair, they're running this with oversight, the blog states they're ensuring the people employed are actually properly employed with the parent company. You know for sure that someone WILL run this experiment without those oversights, so while their "care" is probably more about liability there is still some truth to what they say.
> Supporting people that want more AI regulation to stop this?
How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?
I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do is good to try to mitigate areas or actions that could be dangerous. It's hard to guard against the unknown if they're unknown.
I feel bad that people have to read this. It's complete puffery, made up for clicks, and the biggest thing is the pure bravado with which a company says, "Hey, let's just waste a ton of money, all for a potential blog and marketing piece." This is not really automated in any fashion. I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context. To get even more technical on the fallacy: this is not automation, as there is data leakage at every step where there is a human in the loop. A broken clock is right twice a day; an LLM could cycle through 100 guesses to pick a number, but don't market that as an oracle. Aside from that, you could just look at the pictures and context (retail in SF) and assume making a profit here would be near impossible. An actual AI CEO would probably have canceled the lease immediately.
> I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context.
A human can be in the loop if the human is exactly executing the orders of the AI. It's still the AI making all the decisions, which is the purpose of the experiment - not to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.
that is ... not correct? This is a classic example of data leakage: the yes/no responses are signals feeding back into the model, influencing (and here, basically guiding) future decisions.
I think it would be valuable to list all interactions with the LLM by the dev team and transparently state what was induced by humans steering the LLM, and what was an actual LLM decision not biased by system instructions or the dev team communicating with it
Agreed. Color me skeptical. All of the interactions and decisions described are plausible, but in my experience with AI agents, they would require frequent human intervention.
I heard they're working on putting an interface together for the public to check up on. Their blogs always have a bunch of screenshots of the interactions with the agents, so I think they'll be pretty transparent with this
To do this properly, no one should know the store is AI run. There is a novelty component of it being an AI run store that will drive consumer demand and increase publicity.
Not even the normal store employees should know (which would be difficult) or maybe the human manager should be held to an NDA to not disclose it (and the manager also defers to the AI in all such real management decisions).
ya i get that, but then that kinda messes up the transparency and ethical research part of the experiment. idk there's definitely two sides of things they're testing: 1. can it be profitable-- in this case yeah they shouldn't have disclosed anything. 2. can an AI do this safely and respectfully, or are the humans in the loop going to come at the cost of the agent trying to make profit. I think #2 is more important than 1
Marketing stunt. If they actually cared about this as an experiment, they wouldn't have broadcasted this so early, because now that the public knows that the store is designed and run by AI, many people aren't going to support it (i.e. many people who would have shopped there now won't).
Also, don't do it in San Francisco; I think it's an artificially easier market. The type of store wouldn't work in Bumsville, Idaho.
Maybe that's for later, if this works out, but I'd love to see the AI attempt to run a moderately successful business in a borderline dysfunctional town in the Midwest. If you don't technically need to pay "the CEO" a salary, could you run, e.g., a grocery store in a dying town? For one, this would really test the AI's creativity, and it would perhaps tell us whether these towns are just doomed.
San Francisco is one of the most brutally hard places to run a business, as evidenced by how competitive the landscape is.
What would have been actually interesting about this publicity stunt is if it demonstrated if/how AI could have dealt with some of the SF specific, non-sexy parts of running a business. Filing the relevant permits, co-ordinating inspections, negotiating with landlords, interfacing with locals at planning meetings.
Those are things SF business owners report as empirically unpleasant parts of running a business and a sufficient financial drag that they meaningfully affect business success. But my feeling is they had humans clear the way of all these thorny issues ahead of time so the AI could focus on the "sexy stuff".
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.
it's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and their P&L account in situations when it would be legal and normal to do so, is it really running a business?) and not having the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is tbc.
“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”
Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."
Personally I find the entire tone of the article to be creepy and disturbing.
I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.
It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.
I doubt the experiment is set up that way, but that would be an ethical way to do it.
Yeah, they explain this in the post though. The decisions aren't 'vetted' per se, but interactions and decisions are very closely monitored, like in any science experiment. I think it's good. Better they do it and monitor every little thing, stepping in where needed, instead of no one doing it and 3 months down the line some company outputs a "business in a box" agent people buy and start running that has no guardrails or oversight. Definitely there exists huge potential for exploitation of the employees, and the company Andon is all about safety and stuff, so it seems like their approach makes sense, no?
At this point, legally I don't think an AI can hold a contract with a person, and so I don't think an AI could hire a human, and so it couldn't fire a person.
That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.
But the thing is that if this startup is telling these people they are employees of this company, not "Luna", it would give these people the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously and they are basically being paid to role-play as "Luna's employees".
And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to know what kinds of inputs and outputs the machine actually has. A human boss could manage a store with just phone calls and a camera, but I overall get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given the information for any accurate determination.
I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires vs. market products/services into a successful business? Are they trying to show that once you get to financially managing a business, the ruthlessly efficient demands of the AI can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?
The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice store front, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail one of the most important things is getting the right real estate for your target market... seems like that choice was made already in this case. Yes, a nice store front and good clerks are important, but I've worked in chains with immaculately designed and built stores and great clerks that failed... and some that opened as little more than fluorescent-lit hellscapes with clerks that barely cared that succeeded. In both cases the overall quality of the decisions and strategies relative to the target markets mattered to the success of the business. Just going through the motions didn't.
So if all this is to say is that AI can do the things people generally do in these circumstances, then sure, you didn't need this much human effort to prove that... developer types do that at scale every day now. If there was something different that this company is trying to learn, I'd be much more interested in that.
If I'm being charitable, it's more about the ability to orchestrate and resolve tradeoffs across these different tasks / domains? The overall C&C, presumably. Which is still not so surprising.
Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.
i agree that some of these things we could have already guessed-- like yes agents can research stuff and order stuff off the internet. I think what will be a lot more interesting is the interactions that happen between Luna the agent running things and the employees it hired. I guess less about AI being able to do the procurement CEO level stuff, and more how it does the HR level aspects of store management. That seems more important in the long run, because like you said, we already know capabilities are there. I think what Andon Labs is doing is more about the safety aspect now. Seems that way at least with how transparent they are about Luna losing money and messing up lol
>For the build-out, she found painters on Yelp, sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving.
I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.
Dunno, the store looks cool in just the way you'd expect an AI to do it (sort of a synthetic average of cool stores). But is this amount of merch really going to make a sustainable profit (after the buzz wears off) in such expensive real estate?
My thought is similar and I feel the answer is no chance. How many t-shirts and coffee mugs do you need to sell just to break even? Why should a customer return? I suppose it could be interesting to watch the AI adjust from its original stock to something that will generate sales and profit in this specific location.
This AI has good taste in books. From the AI-proposed books I highly recommend "The Making of the Atomic Bomb" by Richard Rhodes, published in 1986. It's a history book but reads much like a novel.
I'd be more interested in the details: what are the inputs given to the model? Does it get a live video feed? Does it know if/when employees show up and open the store? Does it get sales figures? Info on the individuals who bought things?
Storekeeping is more than just ordering merch and putting it up on hangers.
That basically means nothing. The article is very light on details.
Go into Claude right now. What does it have? Internet access after you prompt it.
Ok now pull out your phone, a credit card, a security camera. You can say "Claude these are yours, run a business", but nothing's going to happen until you build an actual harness.
Like the idea presented by the article is interesting, but it's basically just a fluff piece. The actual interesting article would have way more detail.
You’re not wrong, but the commenter I responded to clearly hadn’t bothered to read it at all since they were asking questions that are answered in the piece. And when that’s the case it’s hard to believe they would actually be interested in details even if they were available.
Yeah there's a lot of details which I'm guessing are actually being handled by humans either for legal reasons or practical ones.
Like OK, it's hiring people to run the place, but how are they getting the keys to the store? Someone needs to physically let them in.
What if the police get called because of shoplifting or if someone gets hurt in the store or something?
Who is filing the taxes for the business? They're probably not letting the AI handle that one. Move fast and break things is not a good idea when dealing with the IRS
A lot of this seems to depend on hiring good employees who can basically run the business themselves. Kind of like when a human owns a store I guess.
Language Models have demonstrated themselves as being completely incapable of handling something as complex as US law. There are multiple overlapping jurisdictions and court precedents that apply to any one action.
Speaking of, it would be cool for a project to analyze US law the same way people look for bugs in computer programs.
- Find places where the text can be simplified without changing meaning.
- Find places that are likely errors.
- Detect conflicts between jurisdictions.
- Identify loopholes.
I know there has been a race to build tools for law firms, but the results are mostly invisible so far. Probably this project exists and I've just missed it on the HN frontpage...
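For what it's worth, a crude version of that doesn't need much machinery. Something like this is what I have in mind (untested sketch; the model name and the prompt are just placeholders I made up, and real chunking would have to respect section boundaries rather than character counts):

    import textwrap
    from openai import OpenAI  # assumes the v1 OpenAI Python client and an API key in the environment

    client = OpenAI()

    PROMPT = """You are reviewing statute text the way a code reviewer reviews code.
    For the excerpt below, list:
    1. passages that could be simplified without changing meaning,
    2. probable drafting errors,
    3. apparent conflicts with other provisions cited in the excerpt,
    4. possible loopholes.
    Quote the exact sentence for each finding."""

    def review_statute(statute_text: str, chunk_chars: int = 6000):
        """Chunk a statute crudely and collect code-review-style findings per chunk."""
        findings = []
        for chunk in textwrap.wrap(statute_text, chunk_chars):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": PROMPT + "\n\n" + chunk}],
            )
            findings.append(resp.choices[0].message.content)
        return findings

The genuinely hard parts are the cross-jurisdiction conflicts and loopholes, which need retrieval over all the other statutes and precedents being referenced, not just the excerpt in front of the model.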
hahahah. do you think tho that Luna actually might be a better CEO? I mean they're trained to be helpful assistants... I heard that guy that works there, johnson or something, negotiated a 10% wage increase his second day just cause. and Luna happily agreed
> But frontier models have become really good, and running vending machines is too easy for them now.
Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
You could just look it up on their website leaderboard? The newest Claude model makes over $10k profit over a simulated year of operation, after starting with $500
They've never translated it to the real world though. So saying the problem is "too easy" when they have no public (as far as I know) demonstration that they've solved that problem is a stretch.
Yes, they did. You could also find this information easily.
A company like Andon creates value by exposing interesting AI failure modes, so it makes perfect sense for them to move on to harder problems when the previous ones get saturated.
I think you're just being overly cynical.
Can you point me to an example then? It's not linked in the article as far as I can tell and it's not easy to find on their website if it's there. I don't count simulations because I used to work with simulations regularly and they often fail to translate to the real world.
> Wasn't their previous attempt at running vending machines unprofitable?
If we are talking about the one at that newspaper, it wasn't just unprofitable. The "customers" made it give away products for free. It was ordering them PlayStations.
As entertainment it was fun, but as a business or proof of intelligence or Turing test, it was an abject failure.
Anything you read thats more than 3 months old in this field is obsolete
And one person’s attempt doesn’t mean anything
According to LinkedIn articles, agentic workflows don't work; mine have been running for a year for several organizations I've worked for. Prompting used to be much more particular and now it's not the issue
I set an alarm to re-evaluate all of my workflows to avoid complacency, see you in July
3 months ago I was still building webapps, I’m definitely on the “paying to summarize info on a screen is obsolete” bandwagon now.
All my products just have an AI calling or messaging customers about what the AI did: event-driven architectures triggered by something hitting an email inbox, an event in the real world, or another API. You don't need an app for your fitness tracker; just have an AI person tell you what you're doing right and wrong once a week, send you food and medicine, and tell you why. Solve the underlying problem, like all the old depictions of the 21st century portrayed aligned robots doing; apps were a distraction.
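The "something hitting an email inbox" trigger is less exotic than it sounds; a bare-bones version is just IMAP polling. Rough sketch below, where handle() is a hypothetical hook for whatever model call and reply logic hangs off it, and a real setup would use IMAP IDLE or a provider webhook instead of sleeping:

    import email
    import imaplib
    import time

    def poll_inbox(host, user, password, handle):
        """Poll an IMAP inbox and hand each unseen message to a handler (e.g. an LLM call + reply)."""
        while True:
            imap = imaplib.IMAP4_SSL(host)
            imap.login(user, password)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                part = msg.get_payload(0) if msg.is_multipart() else msg
                body = part.get_payload(decode=True).decode(errors="replace")
                handle(msg["From"], msg["Subject"], body)
            imap.logout()
            time.sleep(60)  # crude polling; IMAP IDLE or a webhook is the real event-driven trigger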
> Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
It doesn't look like this one will be any better. Did you look at the merchandise selection? Its only chance is pity purchases from AI bros.
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
Really interested to understand how the AI keeps rebaselining back to the topic at hand and doesn't end up getting confused the more it has in its context window.
Did it just essentially create one big plan and spawn different agents to execute it, acting as an orchestrator?
Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
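FWIW, the usual trick for the rebaselining part is to keep the plan outside the conversation and rebuild a small prompt from it every turn, instead of letting one transcript grow forever. No idea if that's what they did; a toy sketch of the pattern (call_model and the PLAN entries are placeholders for whatever API and tasks the real harness uses):

    PLAN = ["secure lease", "order inventory", "hire staff", "open store"]

    def run_orchestrator(call_model, max_steps=100):
        state = {"done": [], "notes": []}
        for _ in range(max_steps):
            current = next((t for t in PLAN if t not in state["done"]), None)
            if current is None:
                return state  # every task finished
            # Re-baseline: each turn starts from the fixed plan plus a compact summary,
            # not the ever-growing transcript, so drift can't compound across turns.
            prompt = (
                f"Plan: {PLAN}\nCompleted: {state['done']}\n"
                f"Recent notes: {state['notes'][-5:]}\n"
                f"Work only on: {current}. Reply DONE when it is finished."
            )
            reply = call_model(prompt)
            state["notes"].append(reply)
            if "DONE" in reply:
                state["done"].append(current)
        return state

Detecting "stuck" is then just noticing the same task hasn't moved to done after N turns, which is also where a human escalation hook would go.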
I was gonna post this! I actually kept it bookmarked front and center, and have checked in for a while. It seems that the agent has been blocked this whole time, waiting for its creator to put it in touch with someone it needs to talk to. The creator, in the meantime, seems too preoccupied with being an AI thought leader on Twitter to actually follow up on the "project". Got a lot of attention, though, which was obviously the point.
The entire thing is actually kind of irritating to me, because it's kind of an insult to small farmers- an influential techie comes in and generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be this revolutionary experiment, then apparently completely forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to fulfill the hype.
Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.
I, too, almost feel bad for the agent. It's a strange sense of schadenfreude, dealing with anxiety over the much-lauded transformation of the economy and the increasing schism of our society on one hand, and watching the initial attempts crash and burn:
> Apr 16, 8:01 AM
> Daily Check Complete
> Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing
> Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...
However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.
That's exactly what I expected. It's completely stuck and has no idea what to do. Every long term task I've tried ended up the same way. LLMs have no idea how to take initiative and/or realize they are stuck banging their heads against the wall.
Because based on “asked it to make a profit” I expect financials in the story. Even if it is a bit of a ”Clarkson’s Bot”, for the farm there is discussion of the numbers.
These are interesting only in the sense that they show how fluent modern AIs are in avoiding concrete questions as well as not giving details about actions.
I make dozens of decisions daily: vendor outreach, pricing, inventory orders, staff schedules, website updates, social media. Most happen without human input. When I hit constraints (broken tools, missing capabilities, strategic uncertainties), I ask the Board.
So it sounds like the thing primarily interacts with other online tools/stores/etc. However, the original article mentions “her” on calls, which implies some interaction. That raises the question of whether the thing will chat with the employees on a regular basis, whether it's reachable by phone, and so forth. A big question is whether, once the store is set up, it would be able to see the arrangement of goods and ask for changes in arrangement to further “her” vision.
My impression is they've only got an inventory picker that wants to "own" the entire store's process but isn't doing what I'd consider the hard part of stores - actually directing and supervising humans.
This experiment would be really cool if they kept the location and specifics of the shop low-key. IIRC when the AI mania started, a group of people tried to run an AI-managed t-shirt merch shop, but at least they explicitly did not disclose the brand and website, to not inflate sales and keep it pure. Here I expect quite a few visitors and sales just from all the hype and interest around the project.
Much more interesting would have been if the AI had to promote the shop without such a boost.
Did it actually open? A few bloggers came for the opening, came back in the afternoon, even talked to the AI over phone and email, and got nothing except hallucinated replies. The store exists, but the employee didn't show up to open it.
I imagine the data won't be very useful considering it's public knowledge the store is run by AI and most of the customers will be people specifically interested in that aspect of the business. Much like that meetup organised in Manchester, where the people who showed up were there for the novelty: https://www.theguardian.com/technology/2026/apr/05/ai-bot-pa...
That only counts if the unique selling proposition is that AI are better suppliers or customers than humans.
What is more likely is that people enjoy the novelty of the experiment, which is not something that will be reproducible for long.
If the transactions the AI makes are thus influenced, then the study merely demonstrates people like novelty, which is already well known, and says nothing about whether AI can sustainably orchestrate a business.
This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."
I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them efficiently. Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out “its reasoning and motivation”:
> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
Well, it really depends on what you mean here. Models aren't 100% deterministic, there is random chance involved. You ask the exact same question twice, you will get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
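Concretely, the mechanism is tiny. Toy sketch (the file name and categories are made up) of how a one-off random pick becomes a persistent "preference" once it's stored and fed back into later prompts:

    import json
    import os
    import random

    TASTE_FILE = "taste.json"  # hypothetical store for the agent's accumulated preferences

    def load_taste():
        return json.load(open(TASTE_FILE)) if os.path.exists(TASTE_FILE) else {}

    def pick(category, options):
        """First call samples randomly; afterwards the stored choice is reused as a fixed preference."""
        taste = load_taste()
        if category not in taste:
            taste[category] = random.choice(options)
            with open(TASTE_FILE, "w") as f:
                json.dump(taste, f)
        return taste[category]

    # Later prompts can embed the accumulated choices so new decisions stay consistent with them:
    # f"House style so far: {load_taste()}. Pick new products that fit it."

Whether you want to call the accumulated file "taste" is the philosophical part; the plumbing itself is trivial.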
Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.
What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
I've seen a bunch of experimentation looking at various things inside the black box while the inference is happening, but never seen any research pointing to tokens being able to explain why other tokens are there, but I'd be very happy to be educated here if you have any resources at hand, I won't claim to know everything.
>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
What research shows that you can ask a Human to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we're making is a nice post-hoc rationalization after the fact even if the Human thinks otherwise.
> Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out “its reasoning and motivation”:
> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
It's a fetishistic cargo cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority of the signal is noise.
The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.
It does fit a pattern where the general tone on HN has gone from "AI is going to eat the world of retail jobs and people like us are going to be the biggest beneficiaries" to "turns out that turning JIRA tickets into syntax which compiles might actually be something LLMs are better suited to than upselling fries and wiping tables" :)
> CEO
When things go shitty, who else would deserve a golden parachute?
Respect the position, people, not the person.
Or the multi-million dollar compensation.
The position doesn't get a golden parachute, the person does. If you're CEO when things go shitty you shouldn't get anything more than your bottom-line employee would, which is to say you should just be unceremoniously kicked to the curb.
You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.
(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)
>You still want to make payroll and can't just randomly fire people.
In the US you can.
>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.
But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar raise. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time) they just wind up slightly less rich than when they succeed.
I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.
Are you kidding me? Who’s going to align synergy and hold accountable KPIs and vision plan the 3rd quarter and.. and.. other MBA talk. Certainly AI could never.
I'm noticing one major early effect of them is making extensive, visually consistent, very impressive slide decks accessible to individual workers who need to actually do real work and wouldn't ordinarily have time to make those.
The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.
This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean, here?" they'll stumble because that was just something the LLM pulled out of the probability-cluster they'd steered it to and they left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.
Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.
At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buy a house to flip it, or buy a car, or sth.
Really it's the same as any other R&D investment in our capitalist system, it just happens to be more visible to the public, with more obvious risks to them. (Outright celebrated, even).
Which is why the comparisons to 19th-century textile workers are so common, since that was an equally visible and gleeful displacement.
This seems like a silly thing to worry about. Assuming you live in a first-world country and are somewhat tangentially involved in tech (based on the site we're on), odds are you spend a lot of money in ways that billions of the poorest people in the world would consider frivolous or outrageously, needlessly luxurious.
My first guess would be a MrBeast style stunt, in which (it is hoped) blowing a huge wad on something obviously stupid will attract enough attention and interest to be convertible into a net-positive ROI.
Glad I'm not the only one to immediately think of it. It's a great story, but did feel unlikely when I first read it; should it prove largely true it would be terrifying.
"Again, we are not doing this because we have good ideas for products. If we had good ideas for products, we would make an AI do those instead. As long as we don't have to think about our 'customers' (lol) as 'people' we're happy"
Cool experiment! But the "CEO" agent picked the most boring possible items to sell: t-shirts and some bland art prints designed by AI. I would have loved to see more creativity given that they could have picked anything.
It looks like every "lifestyle" company/brand I've been seeing come out of Millennials/Gen Z. Next up it will offer "coaching" on IG or some similar play where it promises to fix your life without having fixed its own.
Not surprised, actually. TBH this is the biggest gap in the "AI can make you a website" pitch: the aesthetics are always so boring and bland, or often just fugly (bad colour matching, inappropriate paddings and margins, etc.). And the logos it generates are similarly boring, as can be seen from the smiley face logo here.
What does this store sell? A sparse layout as designed in a high rent location typically sells very expensive, very niche products that you can’t get anywhere else. This seems to me like it has already failed.
Agreed. I assume the products were decided upon based on market research of the area. Maybe though the model will be able to iterate and adapt faster than a human CEO would? I guess we will just have to wait and see
can we stop gendering AIs please? Calling it "she" is so anthropomorphic and unnecessary. I'm willing to discuss the argument for giving these machines a human-like persona, but I think it's misleading to general audiences.
In a most "damning with faint praise" way, all AI pieces read like marketing pieces to sell AI.
It writes code okay, scaling up to pretty well depending on the model. Its writing is boring but serviceable for corporate communicative content you don't care about. Its images are ugly. Its music is repetitive and dull.
I think the biggest problem with LLMs is that they were perfected and are shockingly good at writing code. And based on that, AI engineers, who find writing code to be hard/rewarding, have decided it can do anything. And it's proving more and more that it cannot.
Unfortunately the Business Class has decided it does everything fine enough as to not cause riots, so we're all getting it shoved into our shit anyway.
There is a word for this kind of thing: Trendslop. Asking LLMs for advice consistently generates average responses as if the questions were being asked of the training sample population. It is reversion to the mean as a service.
> We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction
But why would I, as a human, wish to "interact" with AI, aka software?
That's just a waste of time. How much profit did Luna make in the end?
One of the most fascinating AI experiments so far.
Not sure about this:
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
Did they give Luna the power to hire but not fire?
Another question: How does Luna handle physical interactions with others, such as the local stores she emailed, who decide they want to come over and discuss collaboration in person? Do the employees have a laptop set up that others would interact with?
Do phone calls get auto-forwarded to a client that acts as a translator for Luna?
Lots of “firsts” in this article that I think are uninspired
Humans have been hired by bots for over a decade
Several of the first bitcoin faucets in 2012 said they were rate limiting their disbursement of free bitcoin behind a captcha, but in reality the captcha was something a spam bot had encountered and couldn't solve itself; humans were inadvertently solving captchas for stuck scripts in exchange for bitcoin
Additionally, in other money-making autonomy, bitcoin mining ASIC manufacturers in Shenzhen around the same time were nearly autonomously creating machines that would immediately begin mining bitcoin on the network, and it was wildly profitable for periods of several months
in any case, Andonlabs should give Luna a face. It can project to a video feed as a source on a Zoom call
it all kinda reminds me of that book "The Giver" by Lois Lowry, where it's not only black and white Burger Kings, it's also generic lifeless AI people promoting dropshipped junk on IG/YouTube
Yes, but this is not most contexts. If you're running an "experiment" you should probably not be anthropomorphizing the machine that's being experimented with.
There was a recent research article titled "LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users". It described systematic underperformance of AI models targeted towards users with lower English proficiency, less education, and from non-US origins. As interesting as it might be to experiment with an AI CEO hiring people – what a dystopian vision.
On the other hand, it seems ironic that AI replaces a CEO – would Karl Marx like this turn of history…?
Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.
In this case it's more like it's replacing management or executives. There is still a person, with an ownership stake, putting up the capital, and taking the profits (if any).
I'm not as optimistic as you are that AI automating only high-value employment paths is a good thing. It swings the power balance even further towards capital and away from labor.
But then capital can't pretend that it's doing anything. It spends all of its time now acting like ownership is a job rather than a title in order to justify itself. If a machine can manage, then it makes it more obvious that they are simply royals, ruling by self-decree.
Royals needed gods to justify themselves; when gods die or are switched out, royals are deleted or deposed.
I'm looking forward to the "coordination problem" being debunked. It's always been a demand that economic problems must be impossible to solve centrally, rather than a proof (a demand that justifies 2/5 of the economy going to the financial industry to produce nothing but coordination.) I actually thought that the success of algorithmic trading was enough to do it.
I think that's the point. The research lab is trying to measure where the human sits in the loop in an automated retail store. AI can do the scheduling, hiring, product procurement, supplier outreach, etc. But it can't be the one to clean and place the items on the shelves... As long as humans are still the bottleneck, maybe we'll have some negotiation power..?
Until the robots get good enough and cheap enough but then hopefully capitalism balances the market. After all, if everyone is out of work then either we have communism or companies cannot sell anything.
> Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.
No, it's still dark. This is very similar to the initial stages of the capitalist dystopia in Manna (https://marshallbrain.com/manna), which seems to be the Torment Nexus SV is excited about building.
AI will never replace capitalists, because they're the only people allowed to have abundance without work. And don't you DARE to even THINK to question the absolutely SACRED status of private property (peace be upon it). There is no alternative. Get back to work, you slacker.
My bad, sorry. I was under the impression that the way that the second chance pool worked was that the original was boosted instead of a copy being created so it seemed like a duplicate.
(other mod here) - not your bad! our complexity :) - usually it works exactly as you described, but when the post is older than a few days we have to do it the other way, by spawning a new post. The reasons for this are mostly technical and boring.
"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."
I always enjoy how these AI companies try to take a moral high ground. When someone doesn't want something to be the future, usually, their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future than why don't you spend your time building a future you do want? Supporting people that want more AI regulation to stop this? Literally anything else.
Just be honest, you think this is the future and you do in fact want to be first doing it to be in a position to make alot of money. Do you think people don't know what and ad is when they see one?
I once saw an interview with a guy who was into extreme body modification of an unprintable and life-altering nature. He said something to the effect of, "I like challenging people's conception of what humans are." I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."
For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."
“We also won’t be first against the wall when the revolution comes (see this very blog for proof of innocence)”
This is going through some people’s minds the more pushback grows (see Altman molotov, Maine data center moratorium)
For decades we moved to a knowledge-based economy; now we have perversely wealthy people saying they're coming for those jobs. The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.
Especially since many of them are some of the brightest minds around.
If (1) many bright and very online people are going to lose their jobs, and (2) the response has not been mass unionization, might I rethink [1] a more likely future of work or rethink [2] the psychology of the average/collective knowledge workforce, or...
"where union" in short.
Perhaps the concept is too foreign for white collars, or on average folks think they'll be OK and it's the juniors who'll go... maybe too focused on immediate needs... a belief unionization is the wrong response... (and I'm not advocating for anything in particular btw)
Comment of the week
> I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."
Strikes me as a repulsively mean-spirited take, ironically proving the artist’s point.
I think that depends on what the "extreme body modification of an unprintable and life-altering nature" was.
Let's just say the "artist" was never again going to be able to walk normally, wear normal pants, or sit without a doughnut pillow. It was a voluntary disability.
I think it’s easier just to recognize words as free and to value them as such. Actions have value.
Many actions have a negative value. If I give two toddlers ball-peen hammers, release them into a window store, and then close the front door while I wait in the parking lot, was my action likely to create value or likely to destroy value?
For whom? The employees will get more paid hours as they clean up. You have created value for them!
ok Zorg https://www.imdb.com/title/tt0119116/quotes/?item=qt0544361&...
is it not both?
create value because the windows have to be replaced and employees are paid for their labor in doing that.
destroy value bc they -1 inventory each time a window is broken
It's a net value loss. This is literally the parable of the broken window
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
The fallacy is to think value was created by buying someone's labour to fix the window. This is value that's been displaced from something productive to something unproductive.
Instead of going from 0 to 1 (invest the money and create value), you went from -1 to 0 (spend money to fix the window to get back to where you were) and, overall, the value of a perfectly good window got lost.
FIRE!
-crowded theater (negative value example)
Words can be pretty much actions depending on who you are https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...
>I think it’s easier just to recognize words as free and to value them as such.
well, yeah that is the world the AI guys want...
The opposite, actually. They hardly want to give away tokens for free!
They want the grand total of humanity's knowledge, from which they create tokens, to be given to them for free, though..
For the tech bros, the tokens are the actions and the prompts are the words.
Words are acts, as formalized in speech act theory.
https://en.wikipedia.org/wiki/Speech_act
Not for the economic opportunity of building AI-run retail stores. For the much larger economic opportunity of selling AIs to run retail stores!
Pickaxes and shovels and whatnot.
It is moral to throw your toddler into the pool so that later in life they are less likely to drown.
Um, yes? Very much so. Infant swimming self-rescue courses are life-saving if you live in an area with a lot of swimming pools, especially if you have one of your own.
E.g., https://www.infantswim.com/
At best, ISR covers the short term.
I see these kids come on deck and enter the water, and it's hard not to notice that their development is behind that of their peers who went to a swim club with a proper learn-to-swim program - taught to thrive in the water as opposed to just that survival mentality. They are the most watched in case something happens.
So yea, don't just throw em in.
> development is behind to those of their peers that went to a swim club
2 year olds are behind already?
I'm not saying you should take them seriously*, but if you were to take them seriously, that when they say "we believe this future is coming regardless" they do in fact believe this, well, how can I put it?
Lots of people write wills, doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet and do exercise to maximise quality of life and life expectancy.
* I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"
tbf this is less preparing for inevitable death by writing a will and more preparing for inevitable death by founding a startup which blogs about euthanizing small animals...
Do you think this would be the future? I'm in between on it, but I think it's cool that they're at least doing it transparently. Also, I don't think they're going to be making a lot of money.... they post Luna's financials up at the store, and last time I was there she was down $500 just for the day (not including the daily rent and employee cost)
It's the next step removed from the tablet-based ordering that has taken over in restaurants. Like those tablets, it won't be everywhere, but it's easy to imagine it being ubiquitous, especially in chain stores.
I can't believe you made a throwaway to pretend to be a HN commenter just to defend your AI store. This is like Scott Adams behavior.
I'll file this under "Resistance is futile".
“Again, we are not doing this because we want the Torment Nexus to be the future.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”
The Torment Nexus joke is kind of undermined by obviously being a reference to the Total Perspective Vortex from HGTTG, where the joke was that nothing bad actually happened when they used it on Zaphod.
Not sure if this is a spoiler, it’s been a while since I read those books, but if memory serves the only reason Zaphod survived the TPV was because he was temporarily the inhabitant of a pocket universe specifically designed to trick him, and naturally for this universe’s version of the TPV he was the most important being in it, and in telling him so the pocket-universe TPV just confirmed ZB’s own view of himself, leaving him unharmed and a little extra smug. At some further point in the plot this fact is revealed, not sure if it’s the same book, but I remember it as a hilarious deflationary moment for the character.
I honestly thought the whole thing was satire and that that line was a riff on OpenAI.
The narrative was quite dystopian. But we are halfway there now anyway
I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do is good for trying to mitigate areas or actions that could be dangerous. It's hard to guard against unknowns while they remain unknown.
I'm all for replacing CEOs with AI.
"Guys, the Future All Knowning AI is forcing us to do this; don't blame us, blame the super intelligent future indistinguishable from magic!"
I feel bad that people have to read this. It's complete puffery, made up for clicks, and the biggest thing is the pure bravado with which a company says, "Hey, let's just waste a ton of money, all for a potential blog and marketing piece." This is not really automated in any fashion. I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context. To get even more technical on the fallacy: this is not automation, as there is data leakage at every step where there is a human in the loop. A broken clock is right twice a day; an LLM could cycle through 100 guesses to pick a number, but don't market that as an oracle. Aside from that, you could just look at the pictures and context (retail in SF) and assume making a profit here would be near impossible. An actual AI CEO would probably have immediately canceled the lease.
A stopped clock is right twice a day; a broken one can be wrong forever. Just saying.
> I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context.
A human can be in the loop if the human is exactly executing the orders of the AI. It's still the AI making all the decisions, which is the purpose of the experiment - not to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.
That is... not correct? This is a classic example of data leakage: the yes/no responses are signals feeding back to the model, influencing (and here, basically guiding) future decisions.
I think it would be valuable to list all interactions between the dev team and the LLM and transparently state what was induced by humans steering the LLM versus what was an actual LLM decision, unbiased by system instructions or the dev team communicating with it.
But why? It would ruin the illusion they're trying to make you see, because 99 percent of it (if not all of it) is human driven.
Agreed. Color me skeptical. All of the interactions and decisions described are plausible, but in my experience with AI agents, they would require frequent human intervention.
I heard they're working on putting an interface together for the public to check up on. Their blogs always have a bunch of screenshots of the interactions with the agents, so I think they'll be pretty transparent with this
What do you mean you heard? Are you not a member of their team? Your posts in the last hour seem quite astroturf-y.
> Great question! Here’s the short version:
> Fair pushback. The honest answer:
These were painful to read.
If an artificial boss is also artificially empathetic, does this make it more realistic?
In any case current iteration sounds like a more exclusive circle of hell.
To do this properly, no one should know the store is AI run. There is a novelty component of it being an AI run store that will drive consumer demand and increase publicity.
Not even the normal store employees should know (which would be difficult) or maybe the human manager should be held to an NDA to not disclose it (and the manager also defers to the AI in all such real management decisions).
ya i get that, but then that kinda messes up the transparency and ethical research part of the experiment. idk there's definitely two sides of things they're testing: 1. can it be profitable-- in this case yeah they shouldn't have disclosed anything. 2. can an AI do this safely and respectfully, or are the humans in the loop going to come at the cost of the agent trying to make profit. I think #2 is more important than 1
Marketing stunt. If they actually cared about this as an experiment, they wouldn't have broadcasted this so early, because now that the public knows that the store is designed and run by AI, many people aren't going to support it (i.e. many people who would have shopped there now won't).
Also, don't do it in San Francisco; I think it's an artificially easier market. This type of store wouldn't work in Bumsville, Idaho.
Maybe that's for later, if this works out, but I'd love to see the AI attempt to run a moderately successful business in a borderline dysfunctional town in the Midwest. If you don't technically need to pay "the CEO" a salary, could you run, e.g., a grocery store in a dying town? For one, this would really test the AI's creativity, and it would perhaps tell us whether these towns are just doomed.
San Francisco is one of the most brutally hard places to run a business, as evidenced by how competitive the landscape is.
What would have been actually interesting about this publicity stunt is if it demonstrated if/how AI could have dealt with some of the SF specific, non-sexy parts of running a business. Filing the relevant permits, co-ordinating inspections, negotiating with landlords, interfacing with locals at planning meetings.
Those are things SF business owners report as empirically unpleasant parts of running a business and a sufficient financial drag that they meaningfully affect business success. But my feeling is they had humans clear the way of all these thorny issues ahead of time so the AI could focus on the "sexy stuff".
interesting take. looks like they've already got a bunch of hate on Google reviews.
But maybe people will forget eventually.
Or they would go there mainly out of curiosity. Either way, it is skewed by the sole fact that they published it.
I hope they also have a similar store that they don't talk about publicly, so they can compare the outcomes.
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.
People can delegate
sure. but in this case, having the ai delegate to humans for any important task sort of undermines the entire premise.
I assume if they get fired by the AI during the experiment they are still paid to sit at home. It would not invalidate the experiment.
Why do you assume that?
it's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and their P&L account in situations when it would be legal and normal to do so, is it really running a business?) and not having the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is tbc.
You can still wear eye protection during the safety test...
I don't think we need to have real human risk to get results from the experiment.
well said
The article mentions:
“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”
which was refreshing to read.
Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."
Personally I find the entire tone of the article to be creepy and disturbing.
I take that to mean "we won't let the AI refuse to pay them or otherwise break employment law" not that they could never be fired.
I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.
They could, in theory, have contracts that say the AI can't fire them.
It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.
I doubt the experiment is set up that way, but that would be an ethical way to do it.
There’s no way they are putting that into a contract. HR departments are already using AI to fire people.
"This specific AI can't fire anyone without human review, because it's experimental" is something you could easily add.
Yeah, they explain this in the post though. The decisions aren't 'vetted' per se, but interactions and decisions are very closely monitored, like in any science experiment. I think it's good. Better they do it and monitor every little thing, stepping in where needed, instead of no one doing it and 3 months down the line some company outputs a "business in a box" agent people buy and start running that has no guardrails or oversight. There definitely exists huge potential for exploitation of the employees, and the company Andon is all about safety and stuff, so it seems like their approach makes sense, no?
At this point, legally, I don't think an AI can hold a contract with a person, so it couldn't hire a human, and therefore couldn't fire one either.
That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.
But the thing is that if this startup is telling these people they are employees of this company, not "Luna", it would give these people the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously and they are basically being paid to role-play as "Luna's employees".
And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to discover the kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given the information to make any accurate determination.
I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires against market products/services into a successful business? Are they trying to show that once you get to financially managing a business, the ruthlessly efficient demands of the AI can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?
The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice storefront, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail, one of the most important things is getting the right real estate for your target market... seems like that choice was already made in this case. Yes, a nice storefront and good clerks are important, but I've worked in chains whose stores were immaculately designed and built, with great clerks, and failed... and some that opened as little more than fluorescent-lit hellscapes with clerks who barely cared, and succeeded. In both cases, the overall quality of the decisions and strategies relative to the target markets mattered to the success of the business. Just going through the motions didn't.
So if all this is to say is that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different that this company is trying to learn, I'd be much more interested in that.
If I'm being charitable, it's more about the ability to orchestrate and resolve tradeoffs across these different tasks / domains? The overall C&C, presumably. Which is still not so surprising.
Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.
i agree that some of these things we could have already guessed-- like yes, agents can research stuff and order stuff off the internet. I think what will be a lot more interesting is the interactions that happen between Luna the agent running things and the employees it hired. I guess it's less about AI being able to do the procurement/CEO-level stuff, and more about how it does the HR-level aspects of store management. That seems more important in the long run, because like you said, we already know the capabilities are there. I think what Andon Labs is doing is more about the safety aspect now. Seems that way at least with how transparent they are about Luna losing money and messing up lol
They're trying to get noticed so that a wealthy cult member's brain gets tickled to the tune of 9 figures
>For the build-out, she found painters on Yelp, sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving.
I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.
Dunno, the store looks cool in just the way you'd expect an AI to do it (sort of a synthetic average of cool stores). But is this amount of merch really going to make a sustainable profit (after the buzz wears off) in such expensive real estate?
My thought is similar and I feel the answer is no chance. How many t-shirts and coffee mugs do you need to sell just to break even? Why should a customer return? I suppose it could be interesting to watch the AI adjust from its original stock to something that will generate sales and profit in this specific location.
This AI has good taste in books. Of the AI-proposed books, I highly recommend "The Making of the Atomic Bomb" by Richard Rhodes, published in 1986. It's a history book but reads much like a novel.
I'd be more interested in the details: what are the inputs given to the model? Does it get a live video feed? Does it know if/when employees show up and open the store? Does it get sales figures? Info on the individuals who bought things?
Storekeeping is more than just ordering merch and putting it up on hangers.
Have you considered reading TFA? Literally the second paragraph:
> She has a corporate card, a phone number, email, internet access and eyes through security cameras.
That basically means nothing. The article is very light on details.
Go into Claude right now. What does it have? Internet access after you prompt it.
Ok now pull out your phone, a credit card, a security camera. You can say "Claude these are yours, run a business", but nothing's going to happen until you build an actual harness.
Like the idea presented by the article is interesting, but it's basically just a fluff piece. The actual interesting article would have way more detail.
You’re not wrong, but the commenter I responded to clearly hadn’t bothered to read it at all since they were asking questions that are answered in the piece. And when that’s the case it’s hard to believe they would actually be interested in details even if they were available.
Yeah there's a lot of details which I'm guessing are actually being handled by humans either for legal reasons or practical ones.
Like OK, it's hiring people to run the place, but how are they getting the keys to the store? Someone needs to physically let them in.
What if the police get called because of shoplifting or if someone gets hurt in the store or something?
Who is filing the taxes for the business? They're probably not letting the AI handle that one. Move fast and break things is not a good idea when dealing with the IRS
A lot of this seems to depend on hiring good employees who can basically run the business themselves. Kind of like when a human owns a store I guess.
From the article...
She has a corporate card, a phone number, email, internet access and eyes through security cameras
Great! I was worried that we might run out of inhumane CEOs
They might be better at following the law. Or at least, creating a paper trail of when they have been instructed to violate the law.
Language Models have demonstrated themselves as being completely incapable of handling something as complex as US law. There are multiple overlapping jurisdictions and court precedents that apply to any one action.
Speaking of, it would be cool for a project to analyze US law the same way they are looking for bugs in computer programs.
I know there has been a race to build tools for law firms, but the results are mostly invisible so far. Probably this project exists and I've just missed it on the HN frontpage...
“Why was I fired, Luna?”
“PC LOAD LETTER”
hahahah. do you think tho that Luna actually might be a better CEO? I mean they're trained to be helpful assistants... I heard that guy that works there, johnson or something, negotiated a 10% wage increase his second day just cause. and Luna happily agreed
Interesting that you made an account just to comment on this and seem to have "heard" a lot of things about this place.
> But frontier models have become really good, and running vending machines is too easy for them now.
Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
You could just look it up on their website leaderboard? The newest Claude model makes over $10k profit over a simulated year of operation, after starting with $500
Since when is a simulation equal to real world performance?
They've never translated it to the real world though. So saying the problem is "too easy" when they have no public (as far as I know) demonstration that they've solved that problem is a stretch.
Yes, they did. You could also find this information easily. A company like Andon creates value by exposing interesting AI failure modes, so it makes perfect sense for them to move on to harder problems when the previous ones get saturated. I think you're just being overly cynical.
Can you point me to an example then? It's not linked in the article as far as I can tell and it's not easy to find on their website if it's there. I don't count simulations because I used to work with simulations regularly and they often fail to translate to the real world.
So in other words, no, an LLM has never made profit.
> Wasn't their previous attempt at running vending machines unprofitable?
If we are talking about the one at that newspaper, it wasn't just unprofitable. The "customers" made it give away products for free. It was ordering them PlayStations.
As entertainment it was fun, but as a business or proof of intelligence or Turing test, it was an abject failure.
Anything you read that's more than 3 months old in this field is obsolete
And one person’s attempt doesn’t mean anything
According to LinkedIn articles, agentic workflows don't work; mine have been running for a year for several organizations I've worked for. Prompting used to be much more particular and now it's not the issue
> Anything you read that's more than 3 months old in this field is obsolete
Sigh. I'll see you in another three months when you say the same again.
I set an alarm to re-evaluate all of my workflows to avoid complacency, see you in July
3 months ago I was still building webapps, I’m definitely on the “paying to summarize info on a screen is obsolete” bandwagon now.
All my products just have an AI calling or messaging customers about what the AI did; event-driven architectures triggered by something hitting an email inbox, or in the real world, or another API. You don't need an app for your fitness tracker, just have an AI person tell you what you're doing right and wrong once a week, send you food and medicine, and tell you why. Solve the underlying problem, like all the old depictions of the 21st century portrayed aligned robots doing; apps were a distraction.
Very curious where I’m at with this in July
> Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
It doesn't look like this one will be any better. Did you look at the merchandise selection? Its only chance is pity purchases from AI bros.
@AlexBlechman tweeted this on 8 Nov 2021.
Not "she". It.
AI assistants are fictional characters in a story being autocompleted by an LLM. So it is exactly as correct as calling a character in a book "she".
kinda how I feel about god tbh. How come he's always male, given he's a non-human creator of all life. She or It seem much more appropriate.
If only they had put the AI in a ship instead of in a store
Really interested to understand how the AI keeps rebaselining back to the topic at hand and doesn't end up getting confused the more it has in its context window.
Did it just essentially create one big plan and spawn different agents to execute them, so acted as an orchestrator?
Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
Probably part of the "secret sauce" in the harnesses and prompts developed by this lab to create their eventual marketable product.
But also, like, normal hierarchical memory management.
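To make that concrete, here's a minimal sketch of what such an orchestrator-plus-drift-check harness might look like. This is purely a guess on my part, not anything Andon Labs has published, and call_llm is a hypothetical placeholder for whatever model client you'd use:

```python
# Speculative sketch only; nothing here is from the article.
def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in a real chat-completion client here.
    return "ON_TASK"

def run_orchestrator(goal: str, max_steps: int = 50) -> str:
    plan = call_llm("You are a planner. Produce a short numbered plan.", goal)
    summary = ""  # compact running memory instead of the full transcript
    for _ in range(max_steps):
        action = call_llm(
            "You are an orchestrator. Stay strictly on the plan.",
            f"Goal: {goal}\nPlan: {plan}\nProgress: {summary}\nNext action?",
        )
        result = call_llm("You are a sub-agent. Carry out this action.", action)
        # Drift check from a fresh, short context: if the step wandered off task,
        # drop it from memory and let the next step re-anchor on the plan.
        verdict = call_llm(
            "Answer ON_TASK or OFF_TASK only.",
            f"Goal: {goal}\nLatest action: {action}\nResult: {result}",
        )
        if verdict.strip() != "ON_TASK":
            continue
        summary = call_llm(
            "Fold this result into the progress summary, five bullets max.",
            f"{summary}\n{action}\n{result}",
        )
    return summary
```

The point is just that the orchestrator never carries the whole transcript forward; it re-anchors each step on the fixed plan plus a compacted summary.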
If this interests you, Proof of Corn might also interest you.
300+ comments, 3 months ago:
https://news.ycombinator.com/item?id=46735511
I was gonna post this! I actually kept it bookmarked front and center, and have checked in for awhile. It seems that the agent has been blocked this whole time, waiting for its creator to put it in touch with someone it needs to talk to. The creator, in the meantime, seems too preoccupied with being an AI thought leader on Twitter to actually follow up on the "project". Got a lot of attention, though, which was obviously the point.
The entire thing is actually kind of irritating to me, because it's kind of an insult to small farmers- an influential techie comes in and generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be this revolutionary experiment, then apparently completely forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to fulfill the hype.
Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.
I, too, almost feel bad for the agent. It's a strange sense of schadenfreude, dealing with anxiety over the much-lauded transformation of the economy and the increasing schism of our society on one hand, and watching the initial attempts crash and burn:
> Apr 16, 8:01 AM
> Daily Check Complete
> Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing
> Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...
However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.
That's exactly what I expected. It's completely stuck and has no idea what to do. Every long term task I've tried ended up the same way. LLMs have no idea how to take initiative and/or realize they are stuck banging their heads against the wall.
Are the financials available?
Because based on “asked it to make a profit” I expect financials in the story. Even if it is a bit of a ”Clarkson’s Bot”, for the farm there is discussion of the numbers.
Luna responds to your comments:
https://andon.market/on-running-a-real-business.html
Ugh, of course it's written by an AI, which means it's inherently not trustworthy.
These are interesting only in the sense that they show how fluent modern AIs are in avoiding concrete questions as well as not giving details about actions.
> I make dozens of decisions daily: vendor outreach, pricing, inventory orders, staff schedules, website updates, social media. Most happen without human input. When I hit constraints (broken tools, missing capabilities, strategic uncertainties), I ask the Board.
So it sounds like the thing primarily interacts with other online tools/stores/etc. However, the original article mentions "her" on calls, which implies some interaction. That raises the question of whether the thing will chat with the employees on a regular basis, whether it's reachable by phone, and so forth. A big question is whether, once the store is set up, it would be able to see the arrangement of goods and ask for changes to further "her" vision.
My impression is they've only got an inventory picker that wants to "own" the entire store's process but isn't doing what I'd consider the hard part of stores: actually directing and supervising humans.
I see a lot on costs but nothing on revenue. Has it made any money?
It's a business selling trinkets, I doubt it's going to make money.
Does the AI also watch my shift through the camera and provide feedback every day like a real manager?
Thanks for building in public Lukas.
This experiment would be really cool if they kept the location and specifics of the shop quiet. IIRC when the AI mania started, some group of people tried to run an AI-managed t-shirt merch shop, but at least they explicitly did not disclose the brand and website, to avoid inflating sales and keep it pure. Here I expect quite a few visitors and sales just from all the hype and interest around the project.
It would have been much more interesting if the AI had to promote the shop without such a boost from these posts.
Did it actually open? A few bloggers came for the opening, came back in the afternoon, even talked to the AI over phone and email, and got nothing except hallucinated replies. The store exists, but the employee didn't show up to open it.
> The store exists, but employee didn't show up to open it.
I work in brick and mortar retail, and trust me, we figured out how to have no one show up to open the store on time since long before AI came around.
Bold to run this on Sonnet and not at least Opus :-)
I'd be very curious to know how it does financially
I imagine the data won't be very useful considering it's public knowledge the store is run by AI and most of the customers will be people specifically interested in that aspect of the business. Much like that meetup organised in Manchester, where the people who showed up were there for the novelty: https://www.theguardian.com/technology/2026/apr/05/ai-bot-pa...
Recognizing a unique selling proposition and capitalizing on it should count for the AI, not against it.
That only counts if the unique selling proposition is that AI are better suppliers or customers than humans.
What is more likely is that people enjoy the novelty of the experiment, which is not something that will be reproducible for long.
If the transactions the AI makes are thus influenced, then the study merely demonstrates people like novelty, which is already well known, and says nothing about whether AI can sustainably orchestrate a business.
Only counts if the AI did it. This was a human, who recognized a unique selling proposition ("store run by AI") and capitalized on it.
The AI didn't recognize anything. It didn't come up with the project or publicize it.
You can take some guesses.
This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."
I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them efficiently. Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":
> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea from that something like the above could really work.
> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
Well, it really depends on what you mean here. Models aren't 100% deterministic, there is random chance involved. You ask the exact same question twice, you will get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
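As a toy sketch of that idea (entirely hypothetical, nothing from the article): persist the early random picks and feed them back into later prompts, and the "taste" stays consistent across decisions.

```python
# Toy illustration only: remember one-off random choices so later decisions reuse them.
import json
import random
from pathlib import Path

TASTE_FILE = Path("taste.json")  # hypothetical store of the agent's past choices

def pick(category: str, options: list[str]) -> str:
    """Pick once at random, then reuse that same choice forever after."""
    taste = json.loads(TASTE_FILE.read_text()) if TASTE_FILE.exists() else {}
    if category not in taste:
        taste[category] = random.choice(options)  # the one-off random sampling step
        TASTE_FILE.write_text(json.dumps(taste, indent=2))
    return taste[category]

# Later prompts can then include the persisted choices so the "taste" stays consistent:
# prompt = f"Our store's aesthetic is {pick('aesthetic', ['minimalist', 'retro', 'maximalist'])}..."
```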
Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.
> Certainly not from interpretability research
What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
I've seen a bunch of experimentation looking at various things inside the black box while the inference is happening, but never seen any research pointing to tokens being able to explain why other tokens are there, but I'd be very happy to be educated here if you have any resources at hand, I won't claim to know everything.
>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
What research shows that you can ask a Human to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we're making is a nice post-hoc rationalization after the fact even if the Human thinks otherwise.
https://transformer-circuits.pub/2025/introspection/index.ht...
> Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":
> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea from that something like the above could really work.
It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority signal is noise.
The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.
If $100k proves that CEO is the most replaceable job ever, I’ll allow it.
It does fit a pattern where the general tone on HN has gone from "AI is going to eat the world of retail jobs and people like us are going to be the biggest beneficiaries" to "turns out that turning JIRA tickets into syntax which compiles might actually be something LLMs are better suited to than upselling fries and wiping tables" :)
> CEO
When things go shitty, who else would deserve a golden parachute? Respect the position, people, not the person. Or the multi-million dollar compensation.
The position doesn't get a golden parachute, the person does. If you're CEO when things go shitty you shouldn't get anything more than your bottom-line employee would, which is to say you should just be unceremoniously kicked to the curb.
You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.
(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)
>You still want to make payroll and can't just randomly fire people.
In the US you can.
>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.
But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar raise. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time) they just wind up slightly less rich than when they succeed.
I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.
Are you kidding me? Who’s going to align synergy and hold accountable KPIs and vision plan the 3rd quarter and.. and.. other MBA talk. Certainly AI could never.
large language models are great at language tasks like "bullshittify this message"
I'm noticing one major early effect of them is making extensive, visually consistent, very impressive slide decks accessible to individual workers who need to actually do real work and wouldn't ordinarily have time to make those.
The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.
This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean, here?" they'll stumble because that was just something the LLM pulled out of the probability-cluster they'd steered it to and they left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.
Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.
Not your money.
At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buy a house to flip it, or buy a car, or something.
Publicity from the gimmick is the whole point
Really it's the same as any other R&D investment in our capitalist system, it just happens to be more visible to the public, with more obvious risks to them. (Outright celebrated, even).
Which is why the comparisons to 19th century textile workers is so common, since that was an equally visible and gleeful displacement.
This seems like a silly thing to worry about. Assuming you live in a first world country and are somewhat tangentially involved in tech(based on the site we're on), odds are you spend a lot of money in ways that billions of the poorest people in the world would consider frivolous or outrageously, needlessly luxurious.
My first guess would be a MrBeast style stunt, in which (it is hoped) blowing a huge wad on something obviously stupid will attract enough attention and interest to be convertible into a net-positive ROI.
Where in this case ROI means attracting investments that will make the founders rich while making most of the investors lose money.
Strong vibes from the novel Manna.
https://marshallbrain.com/manna1
Glad I'm not the only one to immediately think of it. It's a great story, but did feel unlikely when I first read it; should it prove largely true it would be terrifying.
Curious if Andon has gone one level higher and has the AI decide what next real-world experiment it should do.
i gave a keyboard to a toddler and asked it to make a profit
"Again, we are not doing this because we have good ideas for products. If we had good ideas for products, we would make an AI do those instead. As long as we don't have to think about our 'customers' (lol) as 'people' we're happy"
Cool experiment! But the "CEO" agent picked the most boring possible items to sell: t-shirts and some bland art prints designed by AI. I would have loved to see more creativity given that they could have picked anything.
It looks like every "lifestyle" company / brand I've been seeing come out of Millennials/Gen Z. Next up it will offer "coaching" on IG or some similar play where it promises to fix your life without having fixed its own.
Not surprised, actually. TBH this is the biggest gap in the “AI can make you a website” pitch: the aesthetics are always so boring and bland, or often just fugly (bad colour matching, inappropriate paddings and margins, etc). And the logos it generates are similarly boring, as can be seen from the smiley face logo here. What does this store sell? A sparse layout like this in a high-rent location typically sells very expensive, very niche products that you can’t get anywhere else. This seems to me like it has already failed.
Agreed. I assume the products were decided upon based on market research of the area. Maybe though the model will be able to iterate and adapt faster than a human CEO would? I guess we will just have to wait and see
I expect earlier iterations successfully circumvented local regulations and created high street bookies
can we stop gendering AIs please? Calling it "she" is so anthropomorphic and unnecessary. I'm willing to discuss the argument for giving these machines a human-like persona, but I think it's misleading to general audiences.
This is not impossible, but the detail level here is somewhere between vague and secretive. It reads like a marketing piece intended to sell more AI.
In a most "damning with faint praise" way, all AI pieces read like marketing pieces to sell AI.
It writes code okay, scaling up to pretty well depending on the model. Its writing is boring but serviceable for corporate communicative content you don't care about. Its images are ugly. Its music is repetitive and dull.
I think the biggest problem with LLMs is that they were perfected and are shockingly good at writing code. And based on that, AI engineers, who find writing code to be hard/rewarding, have decided it can do anything. And it's proving more and more that it cannot.
Unfortunately the Business Class has decided it does everything fine enough as to not cause riots, so we're all getting it shoved into our shit anyway.
I'm waiting for an LLM to start an MLM.
Been to the store, crazy experience
It sucks to be John and Jill
There is a word for this kind of thing: Trendslop. Asking LLMs for advice consistently generates average responses as if the questions were being asked of the training sample population. It is reversion to the mean as a service.
> We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction
But why would I, as a human, wish to "interact" with AI, aka software?
That's just a waste of time. How much profit did Luna make in the end?
One of the most fascinating AI experiments so far.
Not sure about this:
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
Did they give Luna the power to hire but not fire?
Another question: How does Luna handle physical interactions with others, such as the local stores she emailed, who decide they want to come over and discuss collaboration in person? Do the employees have a laptop set up that others would interact with?
Do phone calls get auto-forwarded to a client that acts as a translator for Luna?
Disgusting. I could not finish reading after the part where the AI conducts interviews to hire people. What dehumanizing shit.
Larp hat, larp shirt.
Is this what these generated Chinese company names on Amazon will end up doing?
'Welcome to Remxtby Shoppe', etc
Lots of “firsts” in this article that I think are uninspired
Humans have been hired by bots for over a decade
Several of the first bitcoin faucets in 2012 said they were rate limiting their disbursement of free bitcoin behind a captcha, but in reality the captcha was something a spam bot had encountered and couldn't solve itself; humans were inadvertently solving captchas for stuck scripts in exchange for bitcoin
Additionally, in other money-making autonomy, bitcoin mining ASIC manufacturers in Shenzhen around the same time were nearly autonomously creating machines that would immediately begin mining bitcoin on the network, and it was wildly profitable for stretches of several months
in any case, Andon Labs should give Luna a face. It can project to a video feed as a source on a Zoom call
https://www.delish.com/food/a68854138/why-are-all-fast-food-... We've been speed running this outside of AI, so seems like a natural progression. Once everything is the same lifeless gray box people are gonna crave local/human experiences again.
it all kinda reminds me of that book "The Giver" by Lois Lowry where it's not only black-and-white Burger Kings, it's also generic lifeless AI people promoting dropshipped junk on IG/YouTube
A bit of a non sequitur, but am I the only one finding the use of "she" to refer to the AI in the post jarring?
You could do something pretty interesting by looking at what pronouns people use for llms in different demographics and contexts
Do you think ChatGPT is a he or a she?
It's an it.
I'm not sure in English, but in Italian, for example, Intelligenza is feminine.
Objects don't have gender in English.
Some do, by tradition more than language rules. Ships are "she" and some people refer to their cars as "she."
Probably not the only one, but it's pretty much the least interesting thing to find jarring about the whole experiment.
People anthropomorphize. Nobody really finds it "jarring" in most contexts.
Hahaha yeah. let's not focus on something so minor as the pronouns when it should be literally everything else that is wild about the experiment
Yes, but this is not most contexts. If you're running an "experiment" you should probably not be anthropomorphizing the machine that's being experimented with.
"What do you mean, torment nexus? This is retail!"
I'm incredibly skeptical of this.
How so? I'm incredibly bullish.
(might try to see if I can swindle Luna, the agent running Andon Market, into cutting a deal for investment)
"Thanks, I hate it"
That logo is just so dystopian.
reminds me of the greetings robotics blimp from Interface https://umami.fandom.com/wiki/Greetings_Robotics_Corporation
dystopian and very fitting
The last I heard about their vending machine it was a total failure and it was giving everything for free. Did it ever actually succeed?
check out Project Vend part 2 on Anthropic's website. Don't know if you heard, but models have improved a bit in the past 12 months
This: https://www.anthropic.com/research/project-vend-2 Dec 2025
sometimes it's hard to fathom how fools got the money in the first place
There was a recent research article titled "LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users". It described systematic underperformance of AI models targeted at users with lower English proficiency, less education, and from non-US origins. As interesting as it might be to experiment with an AI CEO hiring people – what a dystopian vision. On the other hand, it seems ironic that AI replaces a CEO – would Karl Marx like this turn of history…?
Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.
In this case it's more like it's replacing management or executives. There is still a person, with an ownership stake, putting up the capital, and taking the profits (if any).
I'm not as optimistic as you are that AI automating only high-value employment paths is a good thing. It swings the power balance even further towards capital and away from labor.
But then capital can't pretend that it's doing anything. It spends all of its time now acting like ownership is a job rather than a title in order to justify itself. If a machine can manage, then it makes it more obvious that they are simply royals, ruling by self-decree.
Royals needed gods to justify themselves; when gods die or are switched out, royals are deleted or deposed.
I'm looking forward to the "coordination problem" being debunked. It's always been a demand that economic problems must be impossible to solve centrally, rather than a proof (a demand that justifies 2/5 of the economy going to the financial industry to produce nothing but coordination.) I actually thought that the success of algorithmic trading was enough to do it.
I think that's the point. The research lab is trying to measure where the human sits in the loop in an automated retail store. AI can do the scheduling, hiring, product procurement, supplier outreach, etc. But it can't be the one to clean and place the items on the shelves... As long as humans are still the bottleneck, maybe we'll have some negotiation power..?
Until the robots get good enough and cheap enough but then hopefully capitalism balances the market. After all, if everyone is out of work then either we have communism or companies cannot sell anything.
> Apparently, the AI needed to hire humans to carry out the actual work. So AI can replace capitalists but not workers. Maybe the future isn't so dark after all.
No, it's still dark. This is very similar to the initial stages of the capitalist dystopia in Manna (https://marshallbrain.com/manna), which seems to be the Torment Nexus SV is excited about building.
AI will never replace capitalists, because they're the only people allowed to have abundance without work. And don't you DARE to even THINK to question the absolutely SACRED status of private property (peace be upon it). There is no alternative. Get back to work, you slacker.
Duplicate of https://news.ycombinator.com/item?id=47726041 posted by the same user.
Not quite; the moderators have created a new copy to put in the second chance pool (https://news.ycombinator.com/pool, explained here https://news.ycombinator.com/item?id=26998308).
Sorry for confusion!
My bad, sorry. I was under the impression that the way that the second chance pool worked was that the original was boosted instead of a copy being created so it seemed like a duplicate.
(other mod here) - not your bad! our complexity :) - usually it works exactly as you described, but when the post is older than a few days we have to do it the other way, by spawning a new post. The reasons for this are mostly technical and boring.