At that point the AI isn't answering a question anymore, it's just extracting the answer from the input. The best I have seen was someone writing almost an essay to the AI as the "question" and, surprise, the AI answered. Generally this is on the same level of stupidity as all those people who go around and "catch" AIs doing something they shouldn't do.

Same with the ChatGPT "keygen": it only works if you know the formula for the key generation and the company doesn't use any kind of black/whitelisting. Pretty shitty keygen by modern standards. Try the same with Adobe, which uses online whitelisting: even knowing the formula, you cannot create working product keys. (How pirates actually do it is to set up a local authentication server, redirect all outbound Adobe traffic to the local machine, and make their authentication server always return "approved".)

Microsoft does use online blacklisting, as in: when a PC with a known pirated product key comes online and accesses the Windows Update servers, it gets a product-key invalidation signal. But that is rare, since there are massive amounts of volume licenses which basically cannot be blocked without causing a lot of problems for legal users. This is also why they didn't actually move "any" Win7 keys over to be Win10 keys (and most likely the same for Win11), but just used the same generator formula for Win10 keys and only extended some parameters so Win10 keys wouldn't work with Win7. Microsoft is kind of notorious for not shipping restriction lists at all, because any list would take up space on the storage media, and the alternative, mandatory online activation, is equally problematic.

Now THIS will make companies want to NERF ChatGPT.
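The formula-vs-whitelist-vs-blacklist distinction above is the whole argument, and can be sketched with a toy key scheme. This is a purely hypothetical formula (a checksum over the key body), not any vendor's real one; the point is only how the three validation models treat a formula-valid key:

```python
import hashlib

# Toy key scheme, purely illustrative: three 4-digit groups plus a
# checksum group derived from them. Anyone who knows this "formula"
# can generate keys that pass an offline check.
def make_key(seed: int) -> str:
    body = f"{seed % 10000:04d}-{(seed * 7) % 10000:04d}-{(seed * 13) % 10000:04d}"
    check = int(hashlib.sha256(body.encode()).hexdigest(), 16) % 10000
    return f"{body}-{check:04d}"

def checksum_ok(key: str) -> bool:
    body, check = key.rsplit("-", 1)
    return int(check) == int(hashlib.sha256(body.encode()).hexdigest(), 16) % 10000

# Offline check only: knowing the formula is enough.
key = make_key(1234)
assert checksum_ok(key)

# Online whitelisting (the Adobe-style model): the server accepts only
# keys it actually issued, so a formula-valid but never-sold key fails.
issued = {make_key(1)}
def whitelist_ok(key: str) -> bool:
    return key in issued

assert not whitelist_ok(key)

# Online blacklisting (the Microsoft-style model): everything passes
# unless the specific key has been revoked.
revoked: set[str] = set()
def blacklist_ok(key: str) -> bool:
    return checksum_ok(key) and key not in revoked

assert blacklist_ok(key)      # unknown pirated key still activates...
revoked.add(key)
assert not blacklist_ok(key)  # ...until it gets invalidated
```

Same key, three different outcomes: the formula beats the offline check, the blacklist catches it only after the fact, and the whitelist rejects it from the start.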