I noticed how Worldcoin pumped recently, and it made me think back to some issues I've been mulling over for a while. I think it's no news to people here that the rise of AI is going to make value distribution harder in the future; there may even be many authors here today risking their reputation, and their potential future author rewards, by getting caught using AI. Needless to say, that's only going to get harder and harder to spot.
Imagine once AI becomes so good and so personal that I can feed it all of the posts I've written here over the past 9 years, by date, along with what other languages I know, what I've studied, how old I am, etc. I don't think it'd have a hard time copying me, and then I'm off to shitpost daily to make sure I don't miss out on those juicy autovotes on the days when I'm too busy to write something worthwhile and would otherwise let them go to "waste".
One thing I've been thinking about, and again, I need to note that I'm no coder whatsoever, nor am I that technically inclined. I often have to read things blocktrades or other more technical people post twice before I think I understand them, and while I've learned how Hive works, I don't really know everything on the "atomic level". You know how platforms and services collect your info? Take exchanges, for instance: they need your passport/valid ID, name, social security number, proof of residency, etc. All of it is there to protect, well, their asses from lawsuits or fines from governments that care about what people spend their money on, whether they're trying to avoid taxes or, a more understandable use-case, whether the money is funding dangerous/illegal activities that may put other lives at risk. Okay, exchanges were maybe not the best example, but let's run with it for now.
The big risk, and what people like myself may want to avoid exchanges for, is the thing that's never stated anywhere but happens every so often: the exchange, much like most of web2 security, is at risk of getting hacked. Sure, the thieves may not steal funds, and/or the exchange may have backup funds to cover what gets stolen, but what often happens is that the thieves also steal customer information, and you can't put a price on that! Oh wait, yes you can: that info often gets sold on the "black market" to other malicious actors, who use it for scams, phishing attempts, signing up for things with your passport/ID, and what have you.
Thus, we need a solution for that. Of course this is way above my paygrade, because a proper solution is probably worth billions of $ on its own, but the way Worldcoin goes about it is by scanning your iris. Similar to fingerprints, irises are unique to each human. Now, I don't really know how you'd go about verifying it over and over, since you need a special machine to do it, and from what I hear you only do it once during registration (I haven't researched Worldcoin enough; maybe the info is out there). But how do you verify that the person who registered that one time is the one using a service today when they're "logged in" with Worldcoin authentication? Furthermore, how do we know the registration itself wasn't cheated?

If you were around when the project launched, people in many "3rd world countries" were being scammed out of their Worldcoin airdrop upon registration, because people are usually quite desperate for a buck. The handlers who'd been given the iris-scanning machines were promising random strangers fiat in exchange for getting their eyes scanned. Naturally, these people wanted a quick buck for 1 minute of their time, while the handler kept the real reward, the Worldcoin itself, which was valued quite high in its early days. "You receive $1 for 1 minute of your time while I take the airdrop worth $50" was the scam. So how do we know each person actually received their login info and that it didn't stay behind with the handler? How do we know it's 1,000 unique users logging into an app with Worldcoin auth, and not 1 handler who sold 1,000 logins to someone on the dark net?
There are many ways these things get cheated; viewbotting is quite a thriving market these days. Everyone wants to fake it til they make it, or at least fake it along with everyone else just to compete or get their "online business" off the ground. Aside from viewbots there are also clickfarms: if @holozing tomorrow started giving players $100 to play daily, as long as they logged in with an account and started playing, there would for sure be a clickfarm of 100 phones all plastered to a wall playing the game, chasing a now $10,000-per-day payout, operated by a couple of people who are really quick with their fingers and hands.
What I'm getting at is that this issue seems to need more. But what is "more"? Where does it stop? What can't be faked?
Before we get there, the other thing that may solve part of this is a way to make sure the information we give these services can't be misused. We need a way to safely hand over a copy of your passport to a service without having to live in fear that your info may get hacked, leaked, and misused. We keep talking about "don't trust, verify" with this blockchain technology, but no one I know of is using it to make sure people's personal information isn't sitting in storage waiting for a North Korean hacker to steal it.
Is there a way to collect this data and prove you've deleted it? And before deleting it, can you encode it in a way that lets you check whether the same user or iris is trying to create another registration with your service, while making sure the encoded form can't be decoded back into the original?
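From what I understand, the usual answer to "encode it so it can't be decoded back" is a one-way (cryptographic) hash. Here's a rough sketch of what that could look like for a hypothetical service that stores only a keyed hash of a scanned ID and deletes the raw scan; the `SERVICE_PEPPER` secret and the function names are my own inventions for illustration, not anything Worldcoin or any real service actually does:

```python
import hashlib
import hmac

# Server-side secret (hypothetical). Without it, an attacker who steals the
# stored digests can't precompute hashes of guessed passport numbers.
SERVICE_PEPPER = b"replace-with-a-long-random-secret"

def fingerprint(raw_id: bytes) -> str:
    """One-way, keyed hash of a passport/ID scan.

    The same input always produces the same digest, so duplicate
    registrations can be detected, but the digest can't be decoded
    back into the original data.
    """
    return hmac.new(SERVICE_PEPPER, raw_id, hashlib.sha256).hexdigest()

# The service keeps only fingerprints, never the raw scans.
seen: set[str] = set()

def register(raw_id: bytes) -> bool:
    """Return True for a new user, False if this ID was seen before."""
    fp = fingerprint(raw_id)
    if fp in seen:
        return False  # duplicate registration attempt
    seen.add(fp)
    return True  # the raw scan can now be deleted; only fp is stored

print(register(b"passport-scan-of-alice"))  # first registration: True
print(register(b"passport-scan-of-alice"))  # same ID again: False
```

One caveat: this only works cleanly for data that's bit-for-bit identical every time, like a passport number. Two iris scans of the same eye are never byte-identical, which is why iris-based systems have to do fuzzy matching on a derived "iris code" rather than a plain hash, and that's a much harder problem.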
I'm sure someone reading this has way more info than I do at this point, as they may be following this sector more closely. Please let me know in the comment section if you do!
Lastly, to avoid "handlers" stealing people's logins, we may even need a service that can check and guarantee that a unique person is in fact the one currently using an app, and not using it across multiple devices/accounts. I.e. it's not one guy logged into holozing on 100 cheap smartphones, swiping from phone #1 to phone #20 before moving down to the next row. How do we accomplish that? Constant iris scans? Taking pictures with the phone's camera at random times to verify?
Would you let a service take pictures of you at random times to make sure the ecosystem within isn't being abused? To give you another example beyond web3 games: imagine a proof-of-brain token that didn't care about stake but valued 1 account = 1 vote instead. To combat abusers having 10 Hive accounts voting on themselves daily with all 10, we'd need to make sure each user can only use 1, and that anyone caught attempting to cheat can't create logins in the future, unless they're going to burn off their fingers or gouge out their eyes.
One thing I can tell you is that I wouldn't give this information away unless I knew 100% that it gets deleted and is unhackable, much like a blockchain. In that case, sure, I'd even send in a dick pic if it helps keep abusers out of my proof-of-brain tokens and holozing creature farms.
[Images from Pixabay]