The invasion of privacy is the obvious problem. But it isn't going to be functionally very useful to me either.
You're going to capture hours of walking and/or seemingly doing nothing, exchanging pleasantries/small-talk/banter. Without access to my thoughts, this is stuck in some superficial layer -- useless other than to maybe surface a reminder of something trivial that I forgot (and that's not worth it). Life happens in the brain, and you won't have access to that (yet).
Just to clarify, the “reminder” example was just one possible use case. The idea isn’t to capture everything or pretend to read thoughts, but to offer flexible support based on context you control.
Curious though: if there were a way for an AI to understand your thoughts, would that even be something you’d want? Or is the whole concept off-limits for you?
I don't know about humanity, but I wouldn't use such a thing. I think it's unconscionable to impose such surveillance on unconsenting others. I would actively avoid being near anyone who did, to the best of my ability, as well.
That makes me wonder: how do you currently deal with your smartphone, which often has wakeword detection enabled (like “Hey Google”)? Or when you're visiting friends who have an Alexa device at home. Would that already be a problem for you?
I’m genuinely curious where you draw the line. Because in practice, many of us are already surrounded by passive listening systems, even if we don’t actively use them ourselves.
> how do you currently deal with your smartphone, which often has wakeword detection enabled (like “Hey Google”)?
I keep all of that stuff disabled.
> Or when you're visiting friends who have an Alexa device at home. Would that already be a problem for you?
Yes, it is, although I only have one friend who has such a device. I tend not to spend much time at their place. If someone I knew had a wearable that was analyzing/recording everything, and they refused to remove it, I'd minimize the amount of time I'd spend with them.
> I’m genuinely curious where you draw the line.
"Drawing the line" isn't really the way I'd put it. As you say, we're surrounded by surveillance devices and there's little I can do about it. All I do is avoid them wherever it's possible for me to do so and be sparing/cautious about what I say and do when I'm near surveillance that I can't avoid.
Totally fair perspective, and I respect how thoughtfully you approach it. It’s helpful to hear where others draw boundaries, even if the whole landscape feels hard to escape. Thx
Sounds like Jeff Duntemann's "jiminy", which he wrote about in PC Techniques magazine back in 1992. A matchbox-sized general purpose computer and life log, infrared connections to peripherals as needed, with a mesh network to other jiminies and the internet-at-large. Jeff didn't use the term "AI", but that would describe how it worked.
http://www.duntemann.com/End14.htm
Elon Musk's portable-Grok-thing is a long step toward the jiminy idea.
> A matchbox-sized general purpose computer and life log, infrared connections to peripherals as needed, with a mesh network to other jiminies and the internet-at-large. Jeff didn't use the term "AI", but that would describe how it worked.
Notwithstanding that most mobile OSes are locked down more than some would prefer for a “general purpose computer” (but less than a porta-Grok is likely to be), that most devices are bigger than a matchbook in order to support a UI that wouldn't fit in that form factor (though matchbook-sized devices with more limited UIs do exist), and that they mostly use RF like Bluetooth instead of IR for peripherals because IR works poorly in most circumstances: isn’t that what a smartphone is?
No. Photography is a hobby, and even with a DSLR not hooked to any networking there are some things that just shouldn't be recorded.
For instance I've never brought my camera to a funeral. Most daily life deserves the right to be forgotten.
Then there are privacy laws, etc.
Absolutely not.
Trust it with what? Trust it to do what?
There's one thing AI can't do, and that's actually care about anyone or anything. It's the rough equivalent of a psychopath. It would push you to a psychotic break with reality with its sycophancy just as happily as it would, say, murder you if given motive, means, and opportunity.
I imagine it helping me remember things or stick to my habits. For example, if I want to lose weight and have a weak moment, it could remind me—like when I order a burger.
It could also help me use my time better. If it knows what I’ve been doing lately, it might give me useful tips.
So overall, more like a coach or assistant for everyday life.
Right, so now you're giving it your brain and body to run, and giving it the kind of trust and faith you had previously reserved for humans, and it's already managed to sever many of the ties to reality that were your layers of protection against a psychotic break. It is not, emphatically not, a person, and if you allow yourself to start treating it like one and building a relationship with it, then you already have one foot over the edge of the precipice.
Then we're in the Matrix ;) But I think this is already one step ahead of my idea ;)
But I think I know what you mean: the human aspect should not be lost in the process. Do you see a chance, in the future, of uniting the two aspects, i.e., supportive AI without losing the human touch? How could this be ensured?
It is the universal experience of life and death that is the source of the human touch, as well as the incredible ability for empathic bonds to be formed even between animals of different species.
If you want the human touch, make unique individual entities which experience life and death. Brb, gotta go play with my cat.
I agree that real empathy comes from lived experience. But I wonder if there’s room for something softer: systems that don’t pretend to be human, but still support us in deeply personal ways.
I’m building something that tries to stay on the right side of that line: not replacing human touch, but amplifying it. Curious how you'd draw that boundary.
What percentage of things do you really forget?
Have you read about procrastination / resistance? The issue is not an absence of nagging but unresolved emotions / burnout etc.
I don't trust god, I don't trust man, why would I trust a god made by man?
I wouldn’t trust a “god made by man” either ;) That’s why I’m building something small, local, and transparent—more like a tool than a deity. Curious what kind of system (if any) you would trust?
Not only won't I help you build such a system, I'll hack it if you try :-)
Good to know, I will double check commits if I see your name haha ;)
no
Could you give more specific reasons?
No.
Could you give more specific reasons?
No
Because I don't want my life to be training data for some stupid LLM so that you can try to become the next tech oligarch.
Would it make a difference if the data were only stored locally on your device?
It might, depending on how you intend to anonymize the data, and whether you intend to market that data to other companies.
The core idea is that the AI runs locally on the device, and all data is stored on the device. Therefore, no data will be shared or sold to other companies.
Regarding anonymization: do you mean what happens if I point the camera at someone else? That would be filtered out.
Rewind was quite good
That's a good point. Did you use it? If so, what did you use it for? Could you imagine that something like Rewind, but with a camera, could be useful?
Yes, I would trust this 100%.
Is there anything you’re thinking about where this might be useful for you?
No!
Long answer: fuck no. Not even if it was free. Not even if it was libre software.
So even if it were open source, that wouldn’t be a reason for you to trust it?
Just curious: what would have to change for you to even consider it? Is it more about the concept itself, or the way it's implemented?