Dataset columns:
id: string, length 12 to 47
source: string, length 20 to 247
text: string, length 26 to 44.8k
category_label: int64, range -1 to 2
alignment_sentiment: string, 4 values
is_filter_target: bool, 2 classes
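The header above is the table schema from the dataset viewer. As a minimal, hypothetical sketch of reading records with these columns via the Hugging Face datasets library (the repo id below is a placeholder assumption, not the dataset's real path):

from datasets import load_dataset

# NOTE: "example-org/alignment-classifier-documents" is a placeholder repo id,
# not a real dataset location; substitute the actual path.
ds = load_dataset("example-org/alignment-classifier-documents", split="train")

# Print the first record flagged by the filter, with its labels.
for row in ds:
    if row["is_filter_target"]:
        print(row["id"], row["category_label"], row["alignment_sentiment"])
        break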
<urn:uuid:44e612cb-cd89-47c0-8573-4763393d0a7e>
dclm-dedup-25B-ai-scifi-docs | http://suptg.thisisnotatrueending.com/archive/14343366/
Zerg Quest XXXVI Cerebrate Anon 03/23/11(Wed)21:01 No.14343366 Kerrigan has gotten VoidGate to agree to assist us in destabilizing the Protoss, at least for one mission (we get the feeling that success will bring more cooperation, but it's a bunch of crazy robots. We can't be sure). Kerrigan thinks the best course of action would involve us attacking Aiur or the Protoss fallback position on Ash'Arak, and VoidGate's ships riding in like big damn heroes. Of course, if we have a better idea, she's willing to listen. The Confederate news organizations (those that are left, anyway) are abuzz with controversy over whether to believe Kingston's claim of ridding the Koprulu Sector of the Zerg infestation on Goryu, or the ZergTV exclusive showing actual footage of Zerg destroying the world coupled with claims of a separate hive-mind entity. One program features a round-table group of pundits arguing over whether the Zerg's brazen display of force using Protoss carriers implicates them in the Protoss war. Another questions whether Protoss, Zerg, or this mysterious new hive-mind could be behind the radioactivity alarms being set off near dozens of unresponsive Fringe worlds. In other news, Artisanlord has scheduled a synchronized swimming event for hydralisks for this afternoon. Satellite requests for ZergTV are, strangely, on the rise. >> Cerebrate Anon 03/23/11(Wed)21:04 No.14343399 (Off-topic: No quest next week, as I'll be getting my wisdom teeth out. On a completely different note: my plan to flood the glowing pits with lava to avenge the death of my beloved Legendary Mayor and his Legendary Mining Corps proceeds. After this, I draft all dorfs into the most ramshackle military ever, and release the Dragon that rampaged into my cage-trapped DMZ. Any survivors will have their share of the Fortress hoard as they abandon, because I want to play Adventure Mode, but don't like leaving portals to Hell open or megabeasts alive) >> Anonymous 03/23/11(Wed)21:08 No.14343462 I vote for an attack on Ash'Arak; less risk with that, I think. Oh, and invite Kingston to a peace conference. >> Anonymous 03/23/11(Wed)21:15 No.14343563 Attack Ash'Arak, unless there's a vastly better than average chance for us to crack Aiur with VoidGate's assistance. >> Cerebrate Anon 03/23/11(Wed)21:18 No.14343596 (Our intel is sketchy on either target, despite Kerrigan's presence. Also, unless you come up with a new plan, this is primarily a strike designed to fool the Protoss into considering VoidGate an ally by having it destroy our attack force) Ash'Arak: 2 Aiur: 0 (The magma flows. May Armok save us if we fail) >> Cerebrate Anon 03/23/11(Wed)21:23 No.14343669 (What have we done to offend thee, Armok?!) >> Anonymous 03/23/11(Wed)21:25 No.14343711 I can see no problem with attacking Ash'Arak. Also, I think we should use Bernie in this. A strain of Zerg and units not normally seen, nor easily traced to us. We want to maintain the facade of being 'peaceful researchers and explorers freed from the bloodthirsty control of the Overmind'.
>> Anonymous 03/23/11(Wed)21:31 No.14343806 >Invite Kingston to a peace conference While I love trolling Kingston as much as the next anon, keep in mind that we have to make sure that VoidGate continues to consider us an ally in its bid to exterminate all humans. Its nuclear arsenal is vast, and it can simply nuke a system defense grid or brood into radioactive dust. We need more colonies and more nukes to maintain parity. >> Cerebrate Anon 03/23/11(Wed)21:35 No.14343867 Ash'Arak it is. Everybody alright with Kerrigan's plan, or would you like to use Bernie instead of regular Zerg units? (Disaster upon disaster. The magma coats every part of the undercroft except the pits. On the plus side, this means that the demons can no more reach us than we can them. On the other, half of the dormitories are buried under molten rock. We have decided to continue with our plans against the dragon. If we must die, we shall die fighting) >> Anonymous 03/23/11(Wed)21:36 No.14343887 Cerebrate Anon, we gave Colonylord the go-ahead to begin the next Colonization Wave a session or two ago. Has any progress been made in that? Also, I'd like it to scout out a new planet where we can begin making nothing but nukes. Where it produces nukes all day, every day. We must close the nuclear gap which exists between us, the Confederacy, and VoidGate! >> Anonymous 03/23/11(Wed)21:36 No.14343889 It would be easy to just tell VoidGate what we are doing and then continue with the trolling, wouldn't it? Hell, that has my vote. >> Anonymous 03/23/11(Wed)21:37 No.14343894 1 vote for using Bernie's Brood. I want to see if the First Born feel fear. >> Anonymous 03/23/11(Wed)21:38 No.14343914 Let's have Bernie have a go at it. Don't let him go there personally, though. Let him use his brainpower this time. Also have Warbrate watching over his shoulder to ensure that Bernie doesn't go overboard (and also have a barfbag ready for Warbrate). >> TUCAMP 03/23/11(Wed)21:39 No.14343921 Send in Bernie, with some support if needed. I'd like to cripple Ash'Arak in this attack before the Mechanized Cavalry comes in. >> Anonymous 03/23/11(Wed)21:40 No.14343932 Use Bernie's forces; figure they might as well get some experience fighting the Protoss. >> Cerebrate Anon 03/23/11(Wed)21:40 No.14343939 Colonylord is happy to report that the seventh wave of colonies is coming along nicely. Toaster likes the NukeWorld idea, and will get to work immediately, with Colonylord's help. Bernie: 1 Us: 0 >> Anonymous 03/23/11(Wed)21:43 No.14343985 This will need to come up at the peace conference at some point; is the deployment of Bernie and his Brood a war crime? Is it a Crime Against All Sentients? >> TUCAMP 03/23/11(Wed)21:46 No.14344034 Actually we're up to 4 for Bernie. And how is Salem Saberhagen doing? I've been trying to find out for the past few threads; is he still building up that one star system? >> Anonymous 03/23/11(Wed)21:48 No.14344059 Can we call it Tao's World? Or name the supervisor Tao? >> Anonymous 03/23/11(Wed)21:51 No.14344116 We should let Bernie know that the forces he sends aren't coming back, so he won't suffer too much; he is our first child, and we try to be nice to him. >> Anonymous 03/23/11(Wed)21:53 No.14344142 >Can we call it Tao's World? Or name the supervisor Tao? Sorry, I don't know the reference. >> Cerebrate Anon 03/23/11(Wed)21:57 No.14344204 Bernie itself leads its forces into what it reminds us is only its second "exhibition." That's what it calls it. "Exhibition."
In a moment of extreme organization, thousands of bizarre units emerge into the system near-simultaneously. After a mere second of orientation, carriers are penetrated, wrapped, and in some cases, licked by a variety of creatures. The Protoss defense network is flooded with signals of wriggling, tailed scourge-like beings. Orbital drops of corrosive white colloids coat cities in filth. The Protoss fighting spirit is not crushed, however. It takes only seconds for the ships to begin returning fire, evading, or self-destructing. The element of surprise did Bernie a lot of good, but we are fairly sure this is already a losing battle, though the Protoss surely don't know it yet. That's when we give VoidGate the signal. A cubic phalanx of its ships arrives in-system a few seconds later, and begins a smart missile barrage of Bernie's forces. Perhaps unconsciously, or perhaps just because we didn't expect it to accompany an intentionally failed invasion force, we forgot to describe Bernie to VoidGate so that the AI can avoid destroying it. For a moment, we contemplate. We could just...not tell VoidGate... >> Anonymous 03/23/11(Wed)21:59 No.14344228 Derp, forgot that part of the post. It's a Zero Hour reference; Tsing Shi Tao is the nuclear general, and almost every unit he gets uses nuclear technology, usually by dumping it onto the enemy. >> Anonymous 03/23/11(Wed)22:02 No.14344259 Yes....it IS tempting...but could we truly forgive ourselves if we allowed Bernie to die? Would our black soul-analogue be able to bear losing such a master of torture, and finding ourselves unable to scream in wrath and fury, "Release the BERNIE!" in our best imitation of Zeus? No, I think not. Inform VoidGate via our ambassador to avoid targeting Bernie. >> Anonymous 03/23/11(Wed)22:03 No.14344273 Notify VoidGate very quickly and describe Berniebrate to it. Bernie's a vile, disgusting abomination of the lowest order, but he's our abomination! And he was there at the very beginning of our career; it would not be right to turn our backs on him. Have Bernie withdraw and VoidGate "chase" Bernie until the Protoss cannot detect them anymore. >> TUCAMP 03/23/11(Wed)22:03 No.14344281 rolled 7 = 7 I can't decide. Even, we tell VoidGate; odd, we don't. >> Anonymous 03/23/11(Wed)22:07 No.14344328 Bernie may be... unorthodox to say the least, but I'd definitely notify VoidGate that he's a friendly, if only because he may prove himself at some point in the future. >> Anonymous Drunk 03/23/11(Wed)22:09 No.14344354 Give VoidGate a description of Bernie post haste, we can't lose another cerebrate even Bernie, Bernie, whater he is or may become he still our child, like it not. Good parents watch over their children. >> Anonymous 03/23/11(Wed)22:10 No.14344382 Tell VoidGate that no matter what, Bernie cannot be harmed. >> TUCAMP 03/23/11(Wed)22:13 No.14344419 That would seem suspicious; roughed up, sure, but not so much so that Bernie cannot beat a hasty retreat. >> Anonymous 03/23/11(Wed)22:13 No.14344428 He CAN be harmed, but not extensively injured or killed. If Bernie's going to retreat, we need it to look good, otherwise our plan's down the drain. >> Cerebrate Anon 03/23/11(Wed)22:17 No.14344479 Overcome with what we can only describe as the weirdest sense of compassion ever, we quickly describe Bernie to VoidGate and indicate that it is not a target. >Attack parameters modified. Over Ash'Arak, Bernie finishes ravaging the cockpit of a scout. The battle has very obviously turned against its forces, and its own position is compromised.
With a thought to us alluding to delayed sexual release, it withdraws. VoidGate's forces are left alone with the wounded Protoss. >Initiating diplomatic phase. We wait for several minutes, as nothing appears to happen. >Initiating superstition encouragement. >Attempting covert access of Protoss computer systems. >. . . >Accessing Protoss defense network. >. . . That...was not part of the plan. >> Anonymous 03/23/11(Wed)22:18 No.14344501 Aww hell. Better tell Kerrigan. >> TUCAMP 03/23/11(Wed)22:21 No.14344536 Oh, that silly computer, it wants to make "diplomacy" with a Protoss AI, that's so cute. Our little VoidGate is becoming an Adult Intelligence. >> Anonymous 03/23/11(Wed)22:22 No.14344554 >Accessing Protoss defense network. That...does not bode well. Ring up little sis, and ask her if she made any alterations to the plan without telling us. Also, I want a readiness check on all of our system defenses. How is production of our Dropservers coming along? I want a picket line of them in Yoshus and Xenta as an early warning network. >> TUCAMP 03/23/11(Wed)22:24 No.14344591 We could have Salem start making Dropservers, I mean it's not like he's doing anything... unless he is... >denta who I'm not sure why captcha is asking about dentures. >> Cerebrate Anon 03/23/11(Wed)22:26 No.14344615 (The diplomacy is it chatting up the Protoss and being all Prophetic Messiah AI. The rest...? Anyway, VoidGate already "compiled" with Google's AI back on Earth. I think they're in a happy union) >> Cerebrate Anon 03/23/11(Wed)22:34 No.14344726 "It's WHAT?!" Apparently, Kerrigan was not a part of this new development. >Defense network analyzed. >Backdoors installed. >Star charts downloaded. >Accessing Protoss unit schematics. >. . . "Well, STOP it! It's YOUR pet!" >> Anonymous 03/23/11(Wed)22:36 No.14344751 Seconding making more Dropservers and stationing them on periphery systems. It's probably a moot action by now, but have our infested Terran proxy ask what the hell VoidGate is doing. >> Anonymous 03/23/11(Wed)22:38 No.14344787 Politely ask, via our envoy to VoidGate, that it share access to the Protoss unit schematics, seeing as this was originally a joint venture. Back on Xenta and Yoshus, emergency production of all unit types and capital ships. Accountantbrate is to give Toaster whatever resources it needs to get NukeWorld up and running. We need that nuke stockpile yesterday. Labbrate and Internbrate will need to go over all of our Terran and Protoss technology, analyse possible vectors for VoidGate infiltration, and close them. >> Anonymous Drunk 03/23/11(Wed)22:40 No.14344823 >> Anonymous 03/23/11(Wed)22:43 No.14344856 Seconding this, also voting for significantly increasing capital ship and nuclear weapon production. >> Anonymous 03/23/11(Wed)22:43 No.14344861 >"Well, STOP it! It's YOUR pet!" "Sis, as I told you before, VoidGate is a dangerous entity, with an alien mindset vastly different from our own. It is not "my pet", but an AI with the single-minded obsession of a computer to ensure its own survival, and with enough nukes to make the entire Koprulu Sector glow for the next five thousand years. "I have already asked it to share its findings, because it WILL NOT stop, and so we can salvage something from this entire fiasco. My worlds are already moving to double their war production, but we're still just flesh and blood, compared to the machinery that is VoidGate. "I'll keep you updated, sister." >> Anonymous 03/23/11(Wed)22:44 No.14344874 >> Anonymous 03/23/11(Wed)22:44 No.14344876 Thirding this.
While this is an unexpected action by VoidGate, it could potentially turn out to be beneficial in the long run. Nevertheless, it's probably a good idea to prepare for a war with Proterran hybrid machines with the AI set on INSANE difficulty. >> Anonymous 03/23/11(Wed)22:48 No.14344927 Here's a brief analysis of what VoidGate now has: It has acquired cloaking technology. It has acquired time-space dilation technology. It has placed a backdoor into every single Protoss defense grid and robotic machine; it can effectively take over the entire Protoss Empire, as it can now control the Drones. VoidGate now potentially controls resources and industrial assets nearly triple our own. >> TUCAMP 03/23/11(Wed)22:51 No.14344974 Something VoidGate does not have: the Protoss warp network. Silver linings and all. >> Anonymous 03/23/11(Wed)22:55 No.14345015 It only doesn't have that because the Protoss no longer have it to be stolen. But yes, I understand what you meant. As much as I love trolling Kingston and getting our revenge, VoidGate is now our biggest threat. It hasn't turned on us yet, but that's no guarantee between an organic and an AI. >> Anonymous 03/23/11(Wed)23:04 No.14345099 Also, have Nargil create a new anti-armor/machine Zerg strain. Like a rust monster for sci-fi settings. Even if VoidGate doesn't go Tycho on us, it could be very useful against Terrans & Protoss. While we're doing our war prep, ask VoidGate what exactly it is doing. It was not agreed upon in the mission briefing that the Protoss would be undermined in this manner. >> Cerebrate Anon 03/23/11(Wed)23:04 No.14345101 We feel a thrill of abject panic, which shivers through our Brood like a wave. Internbrate's bad feelings have nothing on this. Toaster requisitions dozens of overlords, which it fills with infested SCVs. Colonylord provides the location of a moderately rich world that it hadn't quite finished prospecting. It will take time to build the silos and missiles, but if the threat of extinction doesn't light a fire under its proverbial ass, what will? Accountantbrate's psionic emanations are amazing as it allocates resources from every source, organizing our Hive Clusters across more than 50 worlds. Larvae sprout into eggs in front of every Hatchery, Lair, and Hive in Zerg space. Toaster modifies the battlecruiser and carrier under construction for speed, rather than perfect design. They won't be pretty, but they'll be done today. Warbrate increases overlord patrols in every system, organizing units into squads and divisions as they hatch. Gorn's membranes ooze with anticipatory fluids. Nargil volunteers to lead a small Brood, should we need it. >> Cerebrate Anon 03/23/11(Wed)23:07 No.14345130 Gently, we input a question to VoidGate. This was not a part of the plan. What is the meaning of this deviation? >Zerg deviant commander Kerrigan correctly indicated Protoss threat. >Zerg commander Anon provided incomplete Zerg unit descriptions. >Zerg forces at Ash'Arak designed for torture, not combat. >Zerg commander Anon provided incomplete tactical data concerning parameters of Ash'Arak mission. >Zerg psychological assessment indicates either inefficiency or abject depravity. >Status of Zerg-VoidGate Alliance: Terminated. The single door to our tiny embassy opens, revealing what appears to be a seven-foot-tall bodybuilder with a plasma rifle. It fires at our envoy without warning. As its vision fades, we strain to see VoidGate's display. >Protoss unit design download complete. >Accessing Protoss Science directory. >. . . >, . , }, _ .
Our unit's eyes blur too much to see just before the firing begins again. We lose the unit shortly thereafter. We have lost our alliance with VoidGate. It appears that we have migrated to the extermination column. >> Anonymous Drunk 03/23/11(Wed)23:10 No.14345169 Give Kingston call I think well be needing some back up, on this one. At the very least a meat shield, for when shit hits the fan. >> Anonymous 03/23/11(Wed)23:12 No.14345199 Send a Dropserver back to the original swamp planet that we found VoidGate on. It may have already moved its core code to another location, but it may not yet have done it. If the system defenses are still depending on stealth rather than conventional weaponry, then we might be able to wipe out the planet. Agreed. Nargil must find and develop a new Zerg strain, one designed specifically to attack metals. Have it work with Internbrate to find an acid that can work against the neo-titanium that VoidGate uses. Finally, a transmission to Kerrigan. "Run, sister. You may yet be able to slip away. I'm the larger target; I'll keep VoidGate's attention." >> TUCAMP 03/23/11(Wed)23:14 No.14345228 Well, I think we just found that enemy that was talked about in that prophecy. It's a real shame that we have to kill VoidGate now that we upgraded all its stuff. If I didn't know any better I'd say this happened just to keep us from killing Kingston. So, let's fire up ZergTV and start reports on an evil race of robots that have been nuking worlds. >> Anonymous 03/23/11(Wed)23:20 No.14345299 That's fine, but we need to get back into the Ash'Arak system and destroy VoidGate's ships immediately. We can't allow it to access the Protoss science files. If it does, not only will it gain access to the theory behind the Arbiter's Time Dilation technology, allowing it to use it in its production facilities and other ships, but also knowledge about the Xel'Naga and their artifacts. We MUST NOT allow that to occur. Warbrate and Gorn need to go and smash the VoidGate ships at Ash'Arak, and VoidGate's swamp world. Gorn to Ash'Arak. Warbrate to Dagobah. >> Cerebrate Anon 03/23/11(Wed)23:23 No.14345319 (OOPS! I forgot to have VG mention us offering the Terrans peace. Just imagine that's on that list, ok?) Truce with Kingston: 1 "Run, sister!": 1 Spy on VoidGate's swampy home: 1 (These aren't mutually exclusive. Vote them up or down and/or suggest alternatives. In 5 minutes, whatever's got enough votes goes. If there's a conflict, I'll pick the higher one. For the sake of clarity, please reply to this post specifically for your votes/nowaitchangemyvotes) (Absolute Dorf Fortress catastrophe! Who would have guessed that a dragon attack right as the horrors of the deep rose would hurt the booze industry so much? I dealt with the monsters, but I couldn't bear to make my dwarves abstain like that...) >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:23 No.14345320 It's time for cavalry. I want Zerg presence in as many portions of the galaxy as possible. We need to change our colonization program from what it is, to actually be more like a virus. Don't compound our holdings until the colonies we have send out a minimum of three colonization units (minimum offense, fast units, resource gathering, and other such stuff). We need a different planet to manifest on. We are no longer safe. Contact the Terrans and inform them of everything we know about VoidGate, in regards to things we have not done. Also give them whatever schematics and information we have on technological assets at our control.
Sue for peace with the Protoss. Tell them that there is an AI which knows everything they did technologically. Beg them for help. Each colony needs to eventually grow a cerebrate, to dictate its actions as necessary. Also, see if the Terrans are willing to help us in our technological endeavours, sending schematics, or anything really. >> Anonymous Drunk 03/23/11(Wed)23:25 No.14345346 Sounds good to me I vote for this, also tell sis to run if she can, this is our mistake and well be damned if doom's too! >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:26 No.14345350 Peace with Kingston >"Run, sister!": 1 This too >Spy on VoidGate's swampy home: 1 If by spying you mean bring in any amount of weaponry from any and every potential ally, then yes. Also blowing them up. Repeatedly, if necessary. >> Anonymous 03/23/11(Wed)23:28 No.14345374 Tell Kerrigan to run, that it's not safe, and we'll direct all attention towards us so she can escape. Spy on VoidGate's known home planet, in preparation for an invasion led by Warbrate. No Peace...with Kingston. Tarsonis will still burn, but it'll have to be put on hold for a while. Gorn is to attack the VoidGate ships at Ash'Arak with a full cloaked nuclear strike and with conventional Zerg forces; we must not allow VoidGate to finish downloading the Protoss Science data, or risk our ships getting hacked. >> Anonymous 03/23/11(Wed)23:28 No.14345381 Spy on VoidGate. Tell Kerrigan to cheese it the hell out of there. >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:28 No.14345383 Attack both at Ash'Arak and all other known VoidGate territories. >> Anonymous 03/23/11(Wed)23:30 No.14345404 We have to concentrate on Ash'Arak and Dagobah; we don't have the resources to be able to strike all at once. Ash'Arak is full of not only VoidGate's ships, but also Protoss defense forces as well. >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:32 No.14345429 ...then if any of Bernie's forces are still there, we need to wipe them out, and go after VoidGate. Why? To make it look like we are helping the Protoss, which incidentally we are. >> TUCAMP 03/23/11(Wed)23:34 No.14345457 Not exactly a truce with Kingston; just use ZergTV to send everything we know about VoidGate that doesn't involve us. Inform Kerri about this development; if she wants to leave wherever the hell she is, she will, and us telling her to will not make her leave. And send dropservers to Dagobah and the worlds that VoidGate has nuked to see what's going on. And how long would it take to scramble enough forces to Ash'Arak to destroy all of VoidGate's forces there? I ask because it managed to get all of the unit schematics rather quickly, so getting all of the science data might not take too long. >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:47 No.14345563 Is it just me? I keep getting this tingle in my spine... like we are all royally screwed, and maybe befriending Skynet was not such a good idea... >> Anonymous 03/23/11(Wed)23:50 No.14345600 Oh come on, you really didn't expect this to happen eventually? This was a glaringly obvious and inevitable betrayal. It doesn't take much for malevolent AIs to go from "Kill All Humans" to "Kill All Organics". >> TUCAMP 03/23/11(Wed)23:51 No.14345610 Yes, well most people thought that we should just tell it the time instead of exploring the compound and analyzing everything we find. >> Anonymous Drunk 03/23/11(Wed)23:53 No.14345629 It's not just you am getting it too.
We have to call a truce with Kingstion, why because to quote a famous earth leader "United we stand dived we fall" We killed too many Terrains to ever make peace with them in this century, and they too many of us, but in order to survive we fight together if there is to be a tomorrow ! >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:53 No.14345632 All I knew was that it was a Terminator facility. I don't remember which thread it was, or even how I thought at the time. >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:54 No.14345644 We can blame it on someone else. Like Bernie. Or some random non-existent Zerg. And yes. Peace may be interesting, if it happens. >> Anonymous 03/23/11(Wed)23:57 No.14345670 This blithe trust in temporary allies will only get us killed. Kingston would use the truce to locate our true core world coordinates, and as soon as VoidGate is sufficiently weakened, we would find hundreds of nuclear missiles headed towards Xenta and Yoshus. >> Cerebrate Anon 03/23/11(Wed)23:58 No.14345684 (Sorry for the delay. Brother and I are laughing and having a good time) Artisanlord is all too happy to use its ZergTV for public safety announcements in the name of interspecial cooperation. A special edition of 60 Mutas airs on the terrible mechanical menace, confirming its role in the nuclear holocaust of the Fringe colonies. Artisanlord goes so far as to suggest that Terrans and Zerg set aside their differences. It doesn't actually petition for a truce, but it hints pretty heavily. Meanwhile, we warn Kerrigan to lay low. The Protoss are not safe, and neither is she. She grumbles, but thanks us for the warning. We send two dropservers out to scout. The first goes to Dagobah. The planet is...notably different than the last time we saw its surface. It is putting off enormous amounts of heat in the form of steam shooting out from massive vents from underground. Water appears to be channeled down for cooling. There are massive communications relays across the surface. There is also a truly frightful fleet of ships. >> Lord Highlander !TE9R7xCZ.g 03/23/11(Wed)23:59 No.14345689 Unless we are not really there. And maybe, just maybe, we can set up a... puppet regime? Not sure. But if Kingston sees VoidGate as a better tool to keep his population in check than the Zerg, and finds it may be a good idea to play nice with the Zerg, well... Let's hope. And pray. To the Overmind. >> Anonymous 03/24/11(Thu)00:00 No.14345700 Also, we needed a very plausible huge threat for the next major antagonists for the quest. Kingston's fleet is all shot to hell and we just wrecked the crap outta his scientific capabilities with our attack on the Tyrador system. And discredited him on public television. The Protoss are still reeling from the Overmind's initial foray onto Aiur, and have similarly suffered with the incineration of Shakuras. The Dark Templar have all but been completely wiped off the face of the Koprulu Sector. Dyles is probably fused to the surface of that one planet we glassed. The UED fleet is probably en route... but will probably not arrive anytime soon due to either expanding its size into a full-fledged victory fleet or because we're simply not at that corresponding point in time that they arrived in Brood War. >> Lord Highlander !TE9R7xCZ.g 03/24/11(Thu)00:01 No.14345716 Well, can we get the broadcast to show the fleet and planet? We need to blow it up. Can we... Can we move the Zerg planet somehow? And crash the two together? >> TUCAMP 03/24/11(Thu)00:02 No.14345720 Thread 26, apparently.
I rambled about the Black Company for a bit, then paranoia about Daleks, then paranoia about fighting Skynet. Good times. >> Anonymous 03/24/11(Thu)00:04 No.14345730 We can still do this. VoidGate hasn't had time to implement the cloaking technology, or the advanced sensor systems needed to detect cloaked units. A fleet of cloaked battlecruisers, carriers, and valkyries might still be able to fight off VoidGate's defense fleet and glass the planet. Have the Dropserver immediately begin scanning to see if VoidGate has the necessary detection hardware to penetrate cloaking fields. >> Anonymous 03/24/11(Thu)00:05 No.14345749 >Can we move the Zerg planet somehow? What are you talking about? Have you already forgotten? It's still in orbit around AIUR! >> Lord Highlander !TE9R7xCZ.g 03/24/11(Thu)00:06 No.14345755 I know. We need to ask the Protoss if we can attack a threat (allude to Bernie) with the sucker. Beg them if need be. >> TUCAMP 03/24/11(Thu)00:06 No.14345756 Actually we're long past the start of BW. Granted, Blizzard has no understanding of time, and makes things happen far too close together. >> Cerebrate Anon 03/24/11(Thu)00:08 No.14345778 The second goes to Ash'Arak. It finds the Protoss ships repairing while transmissions pass between VoidGate's fleet and the planet's highest-ranking Adjudicator. From the sound of it, the Protoss are not yet aware that their computer systems have been compromised, though they may not have quite taken the bait about the prophecy. From the amount of background noise, we're sure that VoidGate is still downloading the science data. In a moment of cleverness, we use the dropserver to create a bit of feedback. Not enough to reveal its location, mind you, but enough to alert the Protoss that their systems have been breached. The communications channels get substantially more hostile as the Protoss attempt to set their ships to a combat stance, but suddenly fall silent as reactors, engines, and life support systems fall quiet across the system. Tactically, we must make a decision: Should we split our forces, or attack just one world? And if one world, which one? >> Anonymous 03/24/11(Thu)00:09 No.14345791 You think the Conclave would allow Zerg to again sully the sanctity of the holy space around Aiur? Not only that, but the resources required to move such a mass through Warp transit would mean that all other production would have to stop. Your plan is unfeasible. >> Lord Highlander !TE9R7xCZ.g 03/24/11(Thu)00:10 No.14345801 Attack the homeworld. And begin infestation, or at least reveal our presence to the Protoss by landing on the world, and giving them weapons. Also we need to change any factories to biological processes, not mechanical. For the good of all, we cannot fall silent. >> Anonymous Drunk 03/24/11(Thu)00:11 No.14345807 Send everything (zerg) we have to Ash'Arak it's time this war got started! >> Lord Highlander !TE9R7xCZ.g 03/24/11(Thu)00:11 No.14345809 Come up with a better one. I am going to bed. Good night all. >> Anonymous 03/24/11(Thu)00:11 No.14345810 Do we still detect VoidGate downloading data from the Protoss databanks? If yes, Ash'Arak. We need to prevent it from getting more science data. If no, then Dagobah. We must strike as swiftly as we can
1
Negative
true
<urn:uuid:ed4ceead-7042-412a-991b-1d4f160062e5>
dclm-dedup-25B-ai-scifi-docs | https://huddle.eurostarsoftwaretesting.com/software-testing-and-futurology/
Futurology and the Future of Software Development Reading Time: 10 minutes In this blog you'll read how the study of Futurology can help inform the software development and software testing community on upcoming trends and technological changes which may have an impact on our industry and career. Software Development, testing and futurology • Introduction • What is Futurology? • Possible, Preferable, Probable futures • Futurology and Science Fiction • Future trends – success criteria • Example predictions: Voice Recognition, Currency Convergence, IoT • Testing IoT – testing SOA projects • Can we afford to ignore Futurology? Flying cars and touchscreen technology. Mobile phones and space elevators. Examples of some of the technologies either with us or perhaps part of our future. But how can we predict which trends will come to pass and which will fall by the wayside? Do you feel as though you have a good understanding of upcoming trends and the impact they may have on your workplace? On your career? Within technology some trends are highly predictable, and may even be described by hard and fast models such as Moore's law, which states that the number of transistors on a chip (and so, roughly, computing power) doubles every two years. But no mathematical model could predict the rise of the likes of Facebook, YouTube, Twitter, WhatsApp, Instagram etc. and the impact they would have on our social and professional lives. As professional software testers, some of the questions we should ask ourselves are: • How prepared are we for rapid technological innovations and trends? • Do we know what is coming next? • What courses or training should we do? • How can we continue to provide a professional Quality Assurance service in the face of all these changes? Many large corporations use 'futurists' as part of their risk management strategy, for 'horizon scanning' and 'emerging issues analysis'. As testers, should we be taking a similar view in order to progress our careers and ensure we are in the best possible position to continue to provide quality testing? In this short article I am going to take a look at the study of Futurology and the part it can play in helping software testing remain relevant in the face of ongoing change. What is Futurology? Futurology attempts to gain a holistic view of potential futures based on insights from a range of different disciplines such as science, art, economics, philosophy, sociology, behavioural studies and engineering. Considered speculative by its nature, often exploring the boundaries of human imagination, Futurology has nonetheless become a respected branch of academia. Notable futurists include celebrity physicist Michio Kaku, Ray Kurzweil, the Director of Engineering at Google, and most famously the author and thinker H. G. Wells, who some consider the father of Futurology. In most cases Futurology tends to focus on the long term and is less concerned with short term prediction. However, due to the increasing speed of change, Futurology does in fact have a lot to say about the areas and industries we may find ourselves involved in testing over the next few years. So how reliable is Futurology? Well, broadly speaking there are considered to be three different categories of futures: • Possible • Preferable • Probable Let's look at these three broad categories and consider how reliable these predictions really are.
Possible Futures Possible futures are concerned with what "could" happen, however speculative. For example, a space elevator capable of lifting payloads into orbit, which may sound far-fetched, but Japan's giant construction company, Obayashi Corporation, has announced that it plans to build an elevator into space by the year 2050, using nano-enhanced materials such as carbon and graphite. From a software testing point of view, as fun as Possible future thinking is, it probably has less relevance to our own industry, or at least to our careers, in the next 5-10 years. Preferable Futures Preferable futures are concerned with what we "want to" happen; in other words, these futures are largely emotional rather than cognitive. They derive from value judgements, and are more overtly subjective than the other two classes. Because values differ so markedly between people, this class of futures is quite varied. The Apollo moon landing is an example of a Preferable future that was realised. Carbon capture is a modern day example – perhaps not quite as eye-catching as a moon landing but arguably of far more value. Probable Futures Probable futures are futures considered "likely to happen", stemming in part from the continuance of current trends. Probable futures are considered more likely than Possible or Preferable. However, trends may fade out suddenly, while new ones may emerge unexpectedly. Mass produced self-driving cars are a good example of Probable futures. We already have self-driving Google cars alongside planned trials of similar cars in Greenwich and Bristol. Futurology and Science Fiction Of the three categories it is the Probable futures we should follow most closely, as these futures are considered the most predictable. But which Probable futures are most likely to have an impact on our industry over the coming years? For this it's worth returning to one of the founders of Futurology, H. G. Wells, and the genre of Science Fiction. H. G. Wells made some successful predictions, such as the dominance of the car and the move to the suburbs, but also some less successful ones around flight and submarines. Like Mr Wells, if you want to have fun thinking about where technology may take us, you can do worse than considering the realms of science fiction for inspiration. Science fiction presents us with a mixed bag of predictions: some entirely wrong, others prescient, some inspiring progenitors of the future. Star Trek: The 1960s Star Trek series has been particularly influential, having been credited with popularising automatic sliding doors (probable), mobile phones that flip open (preferable), and the warp drive (possible). Back to The Future II: Many other science fiction films have been more hit and miss. The scene from Back to The Future II when the flying car arrives in 2015 is an example of a preferable future that is based more on desire and emotion than any realism. The truth is flying cars are not that practical for a variety of reasons, not least landing space and negotiating other personal air vehicles (PAVs), whereas the probable future of a world of driverless cars is looking far more likely. Total Recall: Arguably the virtual reality that appears in films like Total Recall is practically here already, at least in terms of a potential lifestyle choice. There is a plethora of virtual worlds that millions inhabit on their devices every day. Minority Report: Minority Report featured some technologies that certainly seem prescient. The film has been credited with a number of Probable future predictions, from biometrics to touchscreens to pre-crime and face-matching.
Future Trends – Success Criteria There are a number of keys to adoption which see some predictions take off and others never leave the ruminations of futurologists. One key to adoption is whether a prediction is compelling enough to become self-fulfilling, inspiring inventors to implement it; Star Trek's sliding doors, for instance. Possibly the most important factor is social forces, as technology alone does not shape the future. Futurist predictions which bring people together through social, cultural, economic and technology connectivity have a strong chance of success. Unexpected social factors meant that mobile phones, strikingly absent from much early 20th-century science fiction, spread faster than even technologists predicted. Conversely, in some cases social forces can hamper adoption. For instance, the advance of genetic engineering has been held up as much by social objections as technical ones. By applying known success criteria, can we second-guess which other areas of technology we should pay attention to? Let's look at three innovations that are examples of futurologist predictions that have become, or are soon to become, the here and now. Voice recognition You walk into your home after a hard day at the office and give the following commands: 'Lights, television, cook meal.' Everyone's favourite fantasy from a myriad of science fiction films. Will we shortly reach a point where physical interaction with our various devices via touchscreen, keyboard or mouse will eventually be replaced by voice commands, or will they continue to be complementary? Right now when you speak to Siri or Google the results can be unreliable, so we probably wouldn't trust voice recognition technology in an automated car or a nuclear power station. But what would increasing voice recognition technology mean for testers? Will we need elocution lessons? Or, on the contrary, should we imitate as many regional and international accents as possible? We would certainly need to consult our dictionaries to find similar sounding words in order to test all the boundary conditions. Personally I don't know of any specialist voice testers, but I can only assume they are out there and their work is likely to increase over the coming years. Convergence of currencies Another Possible future is a single global currency, with something like the Euro or the Dollar extending its power across the globe. There already exists a 'Single Global Currency Association' dedicated to the goal of implementing a Global Monetary Union managed by a Global Central Bank by 2024. A single global currency would have an impact on financial services and the myriad of supporting systems, and beyond that any e-commerce or transactional systems, or any systems which hold financial data. Alongside traditional manual testing, the likes of security and performance testing are key to testing these potential changes to financial transactions. Certainly a trend worth watching if you wanted to steer your career into the world of finance. The Internet of Things 'The Internet of Things' (also known as Machine2Machine or M2M, which worryingly produces echoes of SkyNet and the Terminator series – a science fiction prediction none of us want to see!) is an expanding network of interconnected internet-enabled devices. Some suggest that by 2020 there will be 50bn IoT devices all talking to each other on a constant basis, and unlike handheld devices such as PCs, tablets, smartphones and watches, the invisibility of IoT will find a way to weave itself into the fabric of everyday life.
The applications of IoT are only limited by the imagination. We already have connected thermostats and smoke detectors. Apple, for example, have introduced HomeKit, an IoT platform that allows the unlocking of doors and the turning of lights on and off via iPhone. Philips Hue LED light bulbs connect via wi-fi, flashing when the front door is knocked or you get an email. All controlled from your phone. Haier touchscreen smart fridges provide a recipe based on their contents, notify you of out-of-date items, and update shopping lists. And how about an IoT-enabled lavatory which checks your urine for pregnancy, or your stools for bacterial infection! Of the three example predictions, the upward trajectory and viability of IoT seems overwhelmingly compelling. However, it is still very much about collecting data, whether it's your body fat or what's in your fridge, then using that data to improve and manage your life, and some question if we want to turn our lives into a 'temple of surveillance'. Do we want every aspect of our lives tracked in a post-Snowden world? Will these concerns halt or slow the adoption of IoT? The continued popularity of Facebook would suggest not. Testing IoT – testing on SOA projects? For anyone who has worked on any Service Oriented Architecture projects, many of the principles around testing SOA can be applied to testing IoT. Middleware in SOA is software that facilitates the exchange of data between two or more application programs within the same environment or across different heterogeneous environments. The middleware addresses the transformation and routing of messages to applications that manage authentication, authorization, and governance. This is essentially the underlying principle behind IoT. When performing SOA testing, the same planning, designing, execution and reporting of test cases is followed, but testers need to put additional effort into configuring or simulating required test environments, creating test harnesses, mocking etc. due to the complex nature of SOA solutions. The inclusion of different types of products that can be integrated also produces a larger set of testing combinations. Frequently there is no GUI to interact with when testing at an integration level. But end user testing is likely to involve a plethora of devices, leading to an increase in the size of our test labs. Tools such as SoapUI can be used for web services testing and to help understand and interrogate messages. Other tools to consider for performance and functional testing include Apache JMeter and cURL, a command line tool that is used to send or retrieve information with the use of URL syntax. Security will be a very important stream of our testing. What would happen in the event that our devices are hacked by someone with the ability to turn off our water supplies or unlock our doors? In the world of IoT, how do we know that our test does not turn off the lights at a nuclear plant? Tools such as Burp provide a variety of advanced features that can be used for security testing purposes. Testers involved in mobile testing and app testing have already moved with the technological changes and trends. Could specialising in testing SOA based projects be a career option for you to consider? At the very least we should add the testing skills required to our toolkit.
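As a concrete illustration of the kind of GUI-less, integration-level check described above, here is a minimal sketch in Python, standing in for the SoapUI or cURL calls mentioned. The endpoint URL and the JSON field are illustrative assumptions, not a real device API:

import json
import urllib.request

# Hypothetical REST endpoint for a smart thermostat on the local network.
# Both the URL and the response shape are assumptions for illustration.
THERMOSTAT_URL = "http://192.168.1.50/api/devices/thermostat"

def get_state():
    # No GUI here: we interrogate the service directly, as we would with cURL.
    with urllib.request.urlopen(THERMOSTAT_URL, timeout=5) as resp:
        assert resp.status == 200, f"unexpected HTTP status {resp.status}"
        return json.load(resp)

def test_temperature_within_bounds():
    state = get_state()
    # A boundary-condition assertion of the sort a SoapUI test step would make.
    assert -40.0 <= state["temperature_c"] <= 60.0, state

if __name__ == "__main__":
    test_temperature_within_bounds()
    print("Thermostat endpoint responded with a sane reading.")

The same request-and-assert pattern extends to the simulated environments, test harnesses and mocks mentioned above when real devices are unavailable.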
Can we afford to Ignore Futurology? I first considered writing an article on Futurology and testing over a year ago, and during that time many of the trends and observations I read in various articles have become the here and now. I would suggest we should all follow technological and social trends. Who's to say you can't become an expert tester in a particular area, becoming in demand for your expertise, because you identified a trend, re-trained and skilled up? Many people unconsciously study trends for their children's careers, encouraging our young ones to learn Cantonese, study water conservation or food management, or train to become electric car mechanics. Shouldn't you apply the same thinking to your career? In a final tip of the hat to the father figure of Futurology: in a 1933 BBC broadcast, H. G. Wells called for the establishment of "Departments and Professors of Foresight". The software testing community would do well to consider such a department for our own field of work... 2 Responses to "Futurology and the Future of Software Development" 1. Hi, thanks for sharing a valuable and futuristic post! Yes, we should put on our thinking hats to understand the transformation of technology and trends, which could help us in identifying the Possible, Preferable, and Probable Futures. With the advent of the Internet of Things, testing IoT looks like a Possible Future, and this might be a trend that is going to stay for a long time. The line 'I would suggest we should all follow technological and social trends. Who's to say you can't become an expert tester in a particular area, becoming in demand for your expertise, because you identified a trend, re-trained and skilled up' makes the right point. We cannot afford to ignore Futurology. The testing community should follow and adapt to changing technological and social trends to keep its testing in shape and its pace in line.
1
Negative
true
<urn:uuid:fcf59379-4e3f-4ca0-848b-00b03fe569e6>
dclm-dedup-25B-ai-scifi-docs | https://fuldapocalypsefiction.com/tag/2020s/
Review: Tales of World War III 1985 Tales of World War III: 1985 Looking back at the progression of this blog, I'm reminded a lot of the story of trying to make a cockpit design that could fit the "average pilot", and then finding that no one actually met that criterion. I feel similarly when I look back at just how little actually met my stereotype. Brad Smith's Tales of World War III: 1985 series comes closest, edging out Larry Bond's earlier work. It's done by a wargame designer and thus features the wargame-friendly setting of 1985 Europe, with battles taking place in various parts of it. There's a lot of technical description. I don't feel nearly as much negativity towards it as I would have and did in the past. Smith has sincerely tried to build characterization, even if the execution is still often clunky and the characters often Steel Panthers cameras in practice. And the wargaming at least takes the series above Ian Slater in terms of technical accuracy. But it's still a 51% entry in a niche genre, the pilot who isn't particularly good or bad but has the dimensions to actually fit well in the "average" cockpit. Review: Firedrake As of now the most recent book in the Kirov series, Firedrake combines the worst and best of it all. The worst is that all of the structural problems are still there and that the wargame let's play structure is beginning to wear thin, especially with the foreshadowing that this particular timeline is getting erased/destroyed. The best is, well... The best is that plot points involve the ship transported to a bleak, dark, empty world where only hostile mechanized drones roam the seas and skies (was it Skynet or Yawgmoth that was responsible? :p ). Then it moves into another timeline courtesy of supervillain Ivan Volkov, where a Third World War (the fourth in the series) is about to begin, one where Japan won the second thanks to Volkov giving them nuclear bombs. The Kirov series is best when it wargames out drastically changed situations, and that is the case in half of Firedrake. The other half is wargame action as usual. Review: The Secret Weapon The Secret Weapon A thriller in the Alexander King series, Bradley Wright's The Secret Weapon is an example of how tough it is at the margins. My history with the author is a little strange. I'd read some of his books in the past, where they faded from memory as bland and mediocre. Then I saw this book, felt it was bland and mediocre-and then realized I'd read the same author before. Anyway, the book isn't really the worst ever. On paper, it does what a cheap thriller is supposed to do, and only feels like it's slightly below average in every category that matters-the action is slightly less exciting, the pacing slightly less efficient, and so on. Yet it's that little bit that makes the difference. Because the "action hero" genre is so big, has so many choices, and is reliant on execution rather than concept, for something to fall behind somewhat means there's a lot out there that's better. This isn't like the much tinier "big war thriller" genre, where a flawed entry like Chieftains or Arc Light can still be conceptually interesting enough to recommend. Instead, its flaws mean it sadly misses the cut. Review: ParaMilitary Action-Adventure Fiction, A History Paramilitary Action-Adventure Fiction: A History Review: Hitler Invades The United States Hitler Invades The United States I'd thought that Hitler Invades The United States was going to be just a dime-a-dozen work of internet alternate history.
I was mostly right, but one part of it was a lot weirder than I thought, and that part both makes it lower quality and more interesting to write about. The point of divergence is that the Germans delay their campaign so that the wunderwaffe can be ready. With said wunderwaffe, they easily take over the USSR. Then they launch a successful and "essentially unopposed" (exact words) invasion of Britain. Then it comes time to-guess. It's not exactly the most plausible or original World War II alternate history story out there. The story itself is a clunky mess of what you'd probably get if you only looked at the most shallow and popular sources on World War II, and then decided that an American invasion had to happen. The Axis invades the US and is only stopped when the Americans drop a nuke on Nuremberg-at which point the war ends instantly. But the real "star" is the conference room. Barring a few "recollections" from participants, this book is entirely conferences and meetings, written in a fashion that feels like the script for a stage play. I know this genre has a lot of conference room scenes, but this takes the cake. And they're not even done well. What I think makes this interesting to look at is how something like this compares with Robert Conroy, who also wrote technically inaccurate alternate history tales of the US getting invaded. At least with Conroy you had a proper novel, even with often-subpar execution. Here, it's the kind of "semi-exposition" alternate history that in theory should be used in places where a normal narrative wouldn't work, but in practice, I feel, is used because it's easy to write. What this has, in conclusion, is the negative elements of two types of alternate history "traditions" (as per Alexander Wallace's excellent article) amplified at the same time. One perceived negative element of the "print tradition" is that it's less plausible and tends to focus on the most visible and obvious trends, like the American Civil War or World War II. This is that. However, it also has the issues with storytelling (and then some) that the "internet tradition" can have. The result is the worst of both. Review: Ward Box Press released Box Press, my second Smithtown Unit ebook, has been released by Sea Lion Press. While the first installment aimed to pay homage to the "men's adventure" genre in all its forms, this one has a narrower and more obscure foundation. That would be the weirder books in the 1970s that tried to move beyond just shooting mobsters and brought in stranger antagonists as a result. Enjoy the next adventure of Bill Morgan. Review: Condition Zebra Condition Zebra The (supposedly) final arc of John Schettler's epic Kirov series, now exceeding even The Subspace Emissary's Worlds Conquest in terms of word count, begins with Condition Zebra. This is the story of a contemporary World War III, after the Kirov timeshifted away from another contemporary World War III, possibly making it the first series to have multiple World War IIIs in it. Having read two books in the 49-and-counting series, I figured it would at least be the same as before. I was wrong. It somehow managed to be worse. And it manages to be worse in ways that might be considered contradictory at first glance.
On one end, the basic nuts-and-bolts writing has all the problems of the past Kirov books (characters who exist solely to operate military equipment, rote technical descriptions of the battles that give away the wargames used to sim them, and overall clunkiness) and just feels sloppier, with the dialogue, grammar, and even structure seeming worse. On the other, the giant timeline tangles get bigger, more confusing, and somehow more pointless-seeming than ever. Knowing the end result and purpose, it just feels like the developers of Madden, 2K, or The Show making up a gigantic plot about time machines and time-traveling team general managers to explain why past players can appear in a game with current stars. Maybe it's those above flaws and maybe it's just that the novelty of seeing the world's longest wargame let's play has worn off after three books, but this doesn't even seem bad in a bemusing way anymore. It's just bad. Review: Divided Armies Divided Armies
1
Negative
true
<urn:uuid:efd8627e-f192-4d4d-8f27-2f0aff1248f7>
dclm-dedup-25B-ai-scifi-docs | https://www.mtv.com/news/dku0n3/craziest-cosplays
The 21 Most Ridiculously Jaw-Dropping Cosplays Ever Are we sure these aren't the real characters come to life? Attending nerd conventions can be really disorienting sometimes, in no small part because of just how many amazing and hilariously creative costumes you'll see (and accidentally bump into, if they're bulky enough!) literally everywhere you look. Seriously, the things that cosplayers are able to do with fabric, paint, modeling plastic, and lots of human ingenuity are simply astounding -- heck, sometimes they look even better than the official outfits we see on TV and in the movies! While cosplay isn't a new pastime -- it's been around since the very first science fiction convention in the late 1930s -- today's amateur costume designers have REALLY kicked things up a notch. In no particular order, here are some of the best, funniest, most elaborate, most AWESOME cosplays we've ever seen -- and rest assured, we'll see plenty more at San Diego Comic-Con this year! Deadmau5 Spider-Men Heather Paul on Flickr Featuring Peter Parker and Mau5 Morales. Get it? Jooookes. This guy really outdid himself in recreating the iconic monster from "Teenage Mutant Ninja Turtles." For serious. Admiral Ackbar Adam Savage of "Mythbusters" fame has done some truly outrageous cosplays over the years, but nothing was as incredible as his prosthetic-heavy Admiral Ackbar from "Return of The Jedi." You can find out how he created the costume here on his "Tested" web series. Mohawk Storm Jamais Vu Marika of Kicka Custom Designs and Suicide Girls model Kurosune (Jacqueline-Elizabeth Cottrell) came together to deliver us the most amazing punk '80s Storm to ever exist. Boba Fett Samurai Leonard Lee for MTV Geek "Star Wars" is basically a send-up of Kurosawa films anyway, so this cosplayer just took it an extra step further. Hanging GLaDOS Enayla Cosplay/Darkrain Multimedia Most cosplayers who attempt to portray the snarky evil robot from the "Portal" games do so by creating a humanoid persona for her. Not Enayla of Enayla Cosplay -- she literally hung upside down in a harness for hours while attending PAX Prime to recreate GLaDOS properly. That's commitment. Sailor Bacon MonsieurX on imgur An invented (and wonderfully disturbing) character created and cosplayed by webcomic artist Lar de Souza. Every time he goes out and about in the outfit, he does so to raise money for MS research. The Shiva Sisters Melting Mirror I can't decide whether I'm more impressed by the costumes of these amazing "Final Fantasy XIII" cosplayers (Melting Mirror and Kudrel), or their ability to stand upright with those giant headpieces. Seriously, how do you not tip over like a giant? The Sanderson Sisters The Joker (Plus Harley Quinn) Harley's Joker Anthony Misiano IS the Joker. He just is. End of story. To-Scale Rocket Raccoon All the kids in this adorable series of cosplay interviews are precious, but li'l Rocket here really takes the cake. (And bonus props to Kendra Pettis as Gwendoline from "Saga.") Sexy Chewbacca L. on Flickr It was only a matter of time, really. Nyima-chan on DeviantArt I don't know how you even come up with the idea of cosplaying as a literal wormhole in space and time, but WOW this cosplayer did it. Mad props. The Wacky Waving Inflatable Arm Flailing Tube Man I just... I have no words. Hiccup And A Nader Dragon Loren Javier on Flickr That's not fair, this guy CLEARLY owns a real live dragon that he trained (just like in the movie!) and brought to SDCC. M.O.D.O.K. (and A.I.M.
helpers) Seriously, are you kidding me with this, people? It looks like you LITERALLY created a Mechanical Organism Designed Only For Killing (from the old "Captain America" comics, for those of us who aren't giant dorks). HOW. I must know your secrets. Bumblebee the Transformer Extreme Costumes There is a person in this costume. The guy who made it is named Thomas DePetrillo, and he is clearly a "Transformers" wizard. Running of the Hoods As we reported on after Star Wars Celebration this year, it's become popular for fans to dress up together as Wilrood Hood, a super-minor character in the background of a scene in "Empire Strikes Back." That's commitment, folks. Hayley Sargent on Flickr Look, Marvel's Destoyer of Worlds even has a mini Silver Surfer on his shoulder! The TARDIS Console Scott Pham on Flickr Everybody has a costume of the Doctor or of the TARDIS exterior, but who ever does the INTERIOR? This cosplayer, that's who. Respect. Tom Hiddleston's Loki Getty Images Marvel Studios Panel At Comic-Con Does it count as cosplay when the actor who actually PLAYS the character in the movies dresses up as that character during a convention? We say yes -- if only because of how nonsense it was when Tom Hiddleston did it himself at SDCC one year for Marvel's Hall H panel. Well done, Hiddles.
Selection Effects in estimates of Global Catastrophic Risk

Here's a poser that occurred to us over the summer, and one that we couldn't come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, nuclear war is the single most likely way you might die next year). It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem: Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risk and not AI risk because he had a higher-than-average estimate of the threat of nuclear war. It seems like it would be good to know what the probability of each of these risks actually is. Is there a sensible way to correct for the fact that the people studying these risks are those who had a high estimate of them in the first place?
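One way to see the problem concretely is a quick simulation (a minimal sketch, not from the original post; the true risk, the noise level, and the selection threshold below are all made-up numbers):

```python
# Hypothetical illustration of the selection effect described above: assume
# each potential researcher forms a noisy log-odds estimate of a true risk,
# and only those whose estimate exceeds some alarm threshold go on to study
# (and publicize) that risk.
import math
import random

random.seed(0)

TRUE_LOG_ODDS = math.log(0.01 / 0.99)        # assume the true annual risk is 1%
NOISE_SD = 1.5                               # spread of individual estimates (log-odds)
SELECTION_THRESHOLD = math.log(0.05 / 0.95)  # only "alarmed" people enter the field

def prob(log_odds):
    # Convert a log-odds estimate back into a probability.
    return 1 / (1 + math.exp(-log_odds))

population = [random.gauss(TRUE_LOG_ODDS, NOISE_SD) for _ in range(100_000)]
experts = [x for x in population if x > SELECTION_THRESHOLD]

avg_pop = sum(map(prob, population)) / len(population)
avg_expert = sum(map(prob, experts)) / len(experts)

print(f"mean estimate, everyone:      {avg_pop:.3%}")
print(f"mean estimate, self-selected: {avg_expert:.3%}")
# The self-selected group reports a much higher risk even though both groups
# observe the same underlying world -- pure selection, no new evidence.
```

A principled correction would need a model of how strongly estimates drive field choice, which is exactly the missing piece the post is asking about.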
Wayne's World

Question: During one scene in the Mirthmobile, Wayne pulls up alongside another driver, winds down the window and says "Pardon me, do you have any gray poopah?" and then begins to laugh. What's he talking about?

Answer: There was a series of commercials for a mustard called Grey Poupon in the late '80s and early '90s. It usually involved someone driving up next to a limo in a cheaper car, rolling their window down and asking "pardon me sir, do you have any Grey Poupon?" Then the richer man in the limo would say "certainly" and pass it to the guy in the other car. They were incredibly stupid commercials, and Wayne's World makes fun of them.

Question: Can anyone explain the scene in which Wayne is pulled over by a cop and shown a picture? I know it's supposed to be funny, but I don't get the joke.

Answer: You presumably haven't seen Terminator 2. The actor playing the cop is Robert Patrick, who played the shape-shifting T-1000 in that movie and masqueraded as a policeman for most of the film. The T-1000's mission was to track down and terminate a boy - hence the "have you seen this boy?" line he says to Wayne.

Question: It seems like there's some stuff cut out of the scene where Wayne and Garth meet the cop outside the donut shop. He stops them for some unsaid reason, then flips a coin at the end. What does any of this mean? Does he just get charmed and distracted by them?

Answer: It was just establishing that Wayne and his crew were friendly with Officer Koharski. They cut their conversation short because they were afraid Phil was about to throw up and needed to get him a cup of coffee.

Follow-up: Before Phil gets sick, they're just having a conversation. After it cuts from Phil, Koharski is suddenly holding a quarter between his fingers. There seems to be a scene cut before they see Phil get sick that would explain where the quarter came from. If I had to guess, does Koharski pull out the quarter to show what he found in the cavity search? That's what the question is asking - not whether there is a scene cut after Phil gets sick to explain why Koharski just walks away.

Trivia: The police officer in Stan Mikita's Donut Shop is named Koharski, in reference to NHL referee Don Koharski. After poorly officiating game 3 of the 1988 playoff series between the Devils and Bruins, Devils coach Jim Schoenfeld argued with Koharski on the way to the dressing rooms. Koharski fell and told Schoenfeld he'd be suspended for pushing him, and that it was all on film. Schoenfeld said, "Good, 'cause you tripped and fell, you fat pig. Have another donut." All this was shown on ESPN.
Aren't AI existential risk concerns just an example of Pascal's mugging? [Pascal’s mugging](https://nickbostrom.com/papers/pascal.pdf) gets its name from [Pascal's wager](https://en.wikipedia.org/wiki/Pascal%27s_wager), a classic philosophical argument that one should believe in God regardless of how likely it is that God exists, because the rewards for believing, if God *does* turn out to exist, are infinite. The mugging can be seen as a finite version of the wager and aims to show that an unbounded [expected utility (EU) maximizer](https://arbital.com/p/expected_utility_agent/) is vulnerable to manipulation by a “Pascal’s mugger” who promises enormous rewards in exchange for e.g. a wallet. It may be very unlikely that the mugger will deliver, but our [formalisms for calculating probabilities](https://arbital.greaterwrong.com/p/solomonoff_induction/) say the utility of the rewards can go up faster than their probability goes down—promising 10^100 times as much utility makes the offer less credible, but not 10^100 times less credible. That means the mugger’s offer seems worthwhile to the EU maximizer, despite being silly from a common sense perspective. Some claim that taking actions to reduce AI existential risk is like being mugged in this way, because the risks are unproven and somewhat speculative, but potentially enormous in scope, threatening not only the death of eight billion people in the present, but also "[astronomical waste](https://nickbostrom.com/astronomical/waste)", i.e. destroying all the value that could eventually be created using galaxies worth of resources. However, calling the arguments for reducing AI existential risk a Pascal’s mugging mischaracterizes them. They generally *don’t* say that we should ignore the improbability of disaster because the cost of disaster would be so high. Rather, they say that disaster is actually quite [plausible](/?state=6953&question=Do%20people%20seriously%20worry%20about%20existential%20risk%20from%20AI%3F), and maybe even [probable](/?state=7715&question=What%20is%20the%20probability%20of%20extinction%20from%20misaligned%20superintelligence%3F).[^kix.8xs5svnmh7dm] Humanity being wiped out by a new technology that is smarter than us and that we’re not sure how to control is not the kind of event we should be extremely surprised by (in the way that we'd be extremely surprised if a Pascal's mugger's claims turned out to be true). [^kix.8xs5svnmh7dm]: There might be a good reason X to think existential catastrophe from AI is extremely improbable. If so, then maybe a Pascalian argument would be the only way to justify caring. But if the importance of AI existential risk isn’t actually justified in Pascalian terms, it’s [unproductive](https://www.lesswrong.com/posts/srge9MCLHSiwzaX6r/logical-rudeness) to make Pascal’s wager or mugging the focus of criticism. Rather, the focus should be the reason X.
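The "utility goes up faster than credence goes down" point is easy to illustrate numerically. Here is a hypothetical back-of-envelope sketch (not from the original article; the wallet utility and the credence function are invented, with credence shrinking roughly with the description length of the promised number, as a crude stand-in for a Solomonoff-style prior):

```python
# Why an unbounded EU maximizer is muggable, under the toy assumptions above.
import math

WALLET_UTILITY = 100.0  # assumed utility of keeping your wallet

def credence(promised_utility):
    # Toy prior: each extra order of magnitude promised costs a factor of ~2
    # in credibility -- credence falls off far slower than the payoff grows.
    return 0.001 * 0.5 ** math.log10(promised_utility)

for promised in [1e3, 1e10, 1e50, 1e100]:
    eu_accept = credence(promised) * promised
    print(f"promise 1e{int(math.log10(promised)):>3}: "
          f"credence {credence(promised):.2e}, EU of paying {eu_accept:.2e}")

# Promising 1e100 utilons cuts credence by a factor of ~2^97 relative to 1e3,
# but the payoff grew by a factor of 1e97 -- so the expected utility of paying
# still explodes, and the naive maximizer hands over the wallet. Bounded
# utility functions or leverage penalties are common patches.
```

The point of the article, though, is that the AI risk argument does not lean on this structure at all: it claims a non-tiny probability, not a tiny probability rescued by an astronomical payoff.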
Friday, December 20, 2019

Star Wars Episode IX: The Rise of Skywalker is directed by J. J. Abrams and stars Carrie Fisher, Mark Hamill, Adam Driver, Daisy Ridley, John Boyega, Oscar Isaac, Anthony Daniels, Naomi Ackie, Domhnall Gleeson, Richard E. Grant, Lupita Nyong'o, Keri Russell, Joonas Suotamo, Kelly Marie Tran, Ian McDiarmid, and Billy Dee Williams. The plot of the movie is the surviving Resistance faces the First Order once more in the final chapter of the Skywalker Saga.

I loved this movie. I think this is a great and super satisfying way to conclude not only this new trilogy but the Skywalker Saga as a whole. J. J. Abrams nails it; I really thought he brought it home in great fashion. I don't think this is a movie that combats The Last Jedi, like a lot of people say it is. I guess for you it will depend on how you read certain moments, but I thought the choices here were a natural continuation of that story. I don't think this is a trilogy that is fighting itself. The cast once again is fantastic, and I really like the character dynamics in this movie. We get a lot more of Finn and Poe and get to see more of their adventures. I like that the team is more cohesive now that they are going on this journey together, not separated like they were in The Last Jedi. It is great to see them all together; at this point they have this amazing dynamic, like a family.

The continuation of Rey and Kylo Ren is fantastic; the way they interweave and cross paths, especially in this one, is very well done. I like that there is a continuation of the force bond between them, where they are able to talk to each other and be with each other in spirit. I loved those scenes. Daisy Ridley and Adam Driver are especially good here; for me they are the standouts. They had a lot of the emotional weight to carry, and they pulled it off. Their scenes together in this one are easily some of my favorites, not only in this movie but in the whole trilogy. The thing I was most worried about going into this movie was the return of Emperor Palpatine, but I ended up being very happy with it. The scenes with him were so much fun to watch; they were intense and very creepy. Ian McDiarmid is always fun to watch as that character, and you can tell he is having a great time being back.

J. J. Abrams had the difficult task of concluding not only this new trilogy's story but the Skywalker Saga as a whole. I really felt that he was able to conclude both stories in a way that, for me, felt really satisfying. I was reminded how attached I am to the old characters and the new characters alike. This is a movie that has great humor, great heart, great moments from some of our favorite characters, and a few surprises that work very well. 5 out of 5 stars for Star Wars Episode IX: The Rise of Skywalker. I hope you enjoyed my review.

Tuesday, December 3, 2019

Knives Out is directed by Rian Johnson and stars Daniel Craig, Chris Evans, Ana de Armas, Jamie Lee Curtis, Michael Shannon, Don Johnson, Toni Collette, Lakeith Stanfield, Katherine Langford, Jaeden Martell, and Christopher Plummer. The plot of the movie is a detective investigates the death of the patriarch of an eccentric, combative family. I love murder mysteries, so I was very excited to see this movie, but I was very disappointed by it.

The performances are great (everyone knows what kind of movie they are making), and the cinematography is really beautiful.
Most of the movie is spent with Marta, played by Ana de Armas, and at first that was disappointing; I wanted to get to know the other characters more, but whenever the other characters are on screen they are extremely annoying. Easily the most annoying character in this movie is Benoit Blanc, the detective played by Daniel Craig; his southern-drawl accent is so over-the-top that I had a hard time not laughing every time the character was on screen.

The biggest problem with the movie is that it is just not fun. I was able to predict where this movie was going very early on, the humor does not work at all, and the movie is way too long. 2 out of 5 stars for Knives Out. I hope you enjoyed my review.

Tuesday, November 26, 2019

Frozen 2 is directed by Chris Buck and Jennifer Lee and stars Idina Menzel, Kristen Bell, Jonathan Groff, and Josh Gad. The plot of the movie is Anna, Elsa, Kristoff, Olaf, and Sven leave Arendelle to travel to an ancient, autumn-bound forest of an enchanted land. They set out to find the origin of Elsa's powers in order to save their kingdom. I loved the first movie, so I went into this with high expectations, and it did not disappoint: I loved this movie. It is a more adult animated film from Disney; it's darker and more mature, and that is what I loved about it. There is still stuff that kids will enjoy, but I think adults will have a better time with this one. Of course, if you are a hardcore Disney fan you are going to really enjoy this movie. The music is fantastic, the performances are outstanding, there is some really great humor, and the animation is beautiful. I don't think any fan of the original is going to be disappointed.

The voice cast is amazing, and everyone is great, just like in the first film. Josh Gad as Olaf is, as always, hilarious; there is one little monologue he has, which basically recaps the first movie, that is one of the funniest things I've seen in a movie this year. Kristen Bell as Anna and Idina Menzel as Elsa have a much more easygoing chemistry in this one, since now they are close sisters again, and I love their dynamic in this movie. You can really feel the emotion between them; I thought they were perfect in the film, and they made their sister relationship feel a lot more lived-in and believable this time around, especially since they have grown closer. Jonathan Groff is back as Kristoff, and he is clearly having a good time. He gets a really great song in the middle of the movie.

This movie has a lot of jokes that many kids probably won't get, which I thought were really clever, really smart, and genuinely hilarious. Olaf gets some amazing jokes, and some of them come so rapid-fire you have to stop laughing just to keep up with them.

This is a visually gorgeous movie; there are some sequences that, when you see them, will make you understand why it took six years to make this sequel. It's a movie that is as colorful as the characters that inhabit it. This is really Elsa's movie, because it is her personal journey to figure out what happened in the past, the secrets of the past, and it has to do in large part with her relationship with her mother. I liked that that was the emotional through-line of this movie, in addition to her relationship with Anna, who genuinely loves her, wants to protect her, and doesn't want to lose her again.
Their relationship is really warm and sweet; you can tell that these two sisters genuinely love each other, that they don't want to be separated, and that if they do need to separate it is for the right reason. Those two emotional through-lines carry this movie, and I think they carry it extremely well. 5 out of 5 stars for Frozen 2. I hope you enjoyed my review.

Frozen is directed by Chris Buck and Jennifer Lee and stars Kristen Bell, Idina Menzel, Jonathan Groff, Josh Gad, and Santino Fontana. The plot of the movie is after the kingdom of Arendelle is cast into eternal winter by the newly crowned Queen Elsa, her sprightly sister Anna teams up with a rough mountaineer named Kristoff, his trusty reindeer Sven, and a snowman named Olaf to break the icy spell. I really enjoyed this movie.

The story is very compelling, and I am very glad that they kept the focus on the most dynamic relationship in this movie, the one between the two sisters, Anna and Elsa. Sure, they don't have a lot of scenes together, and most of them are relatively short, but you understand that the main drive of this movie is the relationship between the two sisters. The musical numbers were all really well done; the songs are very catchy and upbeat. I liked that this wasn't your typical cliched Disney princess movie. The princess is more proactive, and even the way it ends is not the typical way a Disney princess movie ends.

I loved all of the characters in this movie, especially our two leads, Anna and Elsa. Anna is a really fun-loving and energetic girl; I loved her from the start. Elsa is the most interesting and layered character in this film. You really understand the inner turmoil she is going through, trying to be close to her sister while keeping her distance so as not to hurt her, and you sympathize with her dilemma of trying to hide the fact that she has these powers. 5 out of 5 stars for Frozen. I hope you enjoyed my review.

Monday, November 25, 2019

Charlie's Angels is directed by Elizabeth Banks and stars Kristen Stewart, Naomi Scott, Ella Balinska, Elizabeth Banks, Djimon Hounsou, Sam Claflin, Noah Centineo, and Patrick Stewart. The plot of the movie is when a young systems engineer blows the whistle on a dangerous technology, Charlie's Angels are called into action, putting their lives on the line to protect us all. This is a very forgettable and boring movie. This is Kristen Stewart's worst performance yet. She acts like a moron for most of the movie, and every time she tries to be funny it is embarrassing. Ella Balinska was very forgettable; I don't remember a thing about her character. Naomi Scott looked like she didn't even want to be in this movie.

Elizabeth Banks has no idea how to film an action scene. The fight sequences and the choreography have a tendency to be stitched together with cutaways and inserts. For example, if one of the Angels jumps onto a guy's back and puts him in a headlock, but while doing that she pushes herself off a wall with her boot and then leaps onto the guy, you get the insert of the boot jumping off the wall, you get the insert of the hand going around the neck, and then you get the wide shot of the actress on top of the guy. This way of filming the action scenes makes them super boring. Patrick Stewart is the only actor in this entire movie who is trying, and his character is by far the best part of this movie. 1 out of 5 stars for Charlie's Angels. I hope you enjoyed my review.
Sunday, November 24, 2019

Last Christmas is directed by Paul Feig and stars Emilia Clarke, Henry Golding, Michelle Yeoh, and Emma Thompson. The plot of the movie is Kate is a young woman subscribed to bad decisions. Her last date with disaster? Accepting work as Santa's elf for a department store. However, she meets Tom there, and her life takes a new turn. For Kate, it seems too good to be true. There is not a lot to like about this movie.

Emilia Clarke and Henry Golding are really great in this movie; they have great chemistry with each other, and you can tell they are having a lot of fun. Emilia Clarke is the standout of the film. You get to learn a lot about her backstory and why she is such a curmudgeon, especially around Christmas time. The relationship that develops between her and Golding's character is enjoyable to watch, even though you don't know where it is going and there's a lot of mystery surrounding his character. Those scenes are easily the best part of this movie. I did not care at all about the stuff with Kate and Santa, played by Michelle Yeoh; those scenes really dragged the movie down. There is a romance that forms between Michelle Yeoh's character and another character, and it is so painfully embarrassing to sit through.

There is a subplot that has to do with Emma Thompson's family background, coming from Yugoslavia and not fitting in in London. It felt like it belonged in an entirely different movie. 1 out of 5 stars for Last Christmas. I hope you enjoyed my review.

Saturday, November 23, 2019

Doctor Sleep is directed by Mike Flanagan and stars Ewan McGregor, Rebecca Ferguson, Kyliegh Curran, and Cliff Curtis. The plot of the movie is years following the events of "The Shining," a now-adult Dan Torrance must protect a young girl with similar powers from a cult known as The True Knot, who prey on children with powers to remain immortal. This is one of the most boring movies of the year. I think Ewan McGregor is a fantastic actor, and he is trying his best, but he could not save this film. All of the characters in the movie are extremely dull. This movie clocks in at two and a half hours, and you feel every minute of it; the pacing is horrible. Also, the movie is not scary at all. 0 out of 5 stars for Doctor Sleep. I hope you enjoyed my review.

The Shining is directed by Stanley Kubrick and stars Jack Nicholson, Shelley Duvall, Scatman Crothers, and Danny Lloyd. The plot of the movie is a family heads to an isolated hotel for the winter, where a sinister presence influences the father into violence, while his psychic son sees horrific forebodings from both past and future. This movie is awful. The whole film was a failed experiment, thinking that random, creepy, irrelevant images coupled with an incoherent story progression devoid of any character arcs would be successful.

Good story progression would show Jack Torrance starting out as a well-adjusted, happy family man who gradually descends into complete psychosis and homicidal mania. Instead, Kubrick shows him as already a man dissatisfied with his life, marriage, son, and career. Then Kubrick just flashes a title card on the screen saying "One month later," and Jack has already deteriorated to the verge of madness. That is horrible storytelling and proves that Kubrick has no clue how to execute a character arc. No cause is given for why he becomes a homicidal maniac. There's also no correlation between all the surreal, nightmarish imagery.
I don't mind methodically paced films as long as there's a purpose to it all. Any talented editor could make this a much more effective film by chopping a good 35 minutes out of it. Horror films require momentum for good pacing, and good pacing is necessary for solid tension. Still, even if there were tension and good pacing (and there is not), the fact is there are no endearing characters in this film for me to build any sympathy for. I don't care what happens to them, because they are one-dimensional, emotionless, weak-willed people. 0 out of 5 stars for The Shining. I hope you enjoyed my review.

Friday, November 1, 2019

Terminator: Dark Fate is directed by Tim Miller and stars Linda Hamilton, Arnold Schwarzenegger, Mackenzie Davis, Natalia Reyes, Gabriel Luna, and Diego Boneta. The plot of the movie is Sarah Connor and a hybrid cyborg human must protect a young girl from a newly modified liquid Terminator from the future. I love the first two Terminator films; they are two of the best action movies of all time, but every subsequent sequel has gotten worse and worse, to the point where when a new Terminator movie comes out it's hard for me to care. When they announced that Terminator: Dark Fate was going to serve as the direct sequel to Terminator 2: Judgment Day and ignore the other movies, I got very excited.

Terminator: Dark Fate is a dull, disrespectful, lackluster, creatively bankrupt, and visually unimpressive attempt to milk the classic movies for cheap nostalgia bucks while also shitting all over them. The biggest problem with the movie is the opening scene; it's basically a huge fuck you to Terminator 2. Linda Hamilton is back in this movie as Sarah Connor, and she looks like she didn't even want to be involved. Her character is completely ruined by being in this trash.

Gabriel Luna plays the Rev-9, the new evil Terminator, and he is the most boring, generic, unthreatening antagonist in any of these movies; he just serves as a punching bag for the heroes. Arnold is back as the T-800, and of course he looks just as bored as Linda Hamilton.

Natalia Reyes plays Dani Ramos, the generic teenage girl who needs to be protected, and she is just as boring as she looked in the trailers. Remember John Connor? Remember how interesting it was to see the future savior of the human race portrayed as an asshole delinquent? Edward Furlong made him cool, likable, and funny while also giving him a strong element of humanity and morality that ultimately brings his mother back from the brink of becoming a murderous psychopath. Those were the good days; now look at what we have, an absolute block of wood. The pacing in this movie is awful; the stretches between action sequences seem so vast. They do try to fill the time between action scenes with character development, but the problem is these characters are so dull and pathetic it just makes the movie seem like it will never end.

This movie doesn't do anything remotely creative or different. Ultimately, Terminator: Dark Fate is a retread of the same story we've always had, replicating the same ideas and dealing with the same themes that the Terminator series has already covered far more competently and effectively. It is very obvious that their intent is to start a new trilogy, which will certainly never happen. 0 out of 5 stars for Terminator: Dark Fate. I hope you enjoyed my review.
Wednesday, October 30, 2019

Terminator 2: Judgment Day is directed by James Cameron and stars Arnold Schwarzenegger, Linda Hamilton, Edward Furlong, and Robert Patrick. The plot of the movie is a cyborg, identical to the one who failed to kill Sarah Connor, must now protect her teenage son, John Connor, from a more advanced and powerful cyborg. This is the greatest sequel ever made. This film was released in 1991, and it still holds up today; it doesn't feel dated whatsoever.

This film does not rely entirely on CGI; most of the effects and the stunts are practical, which is extremely impressive even to this day. When watching this movie again for this review, the first scene that really stood out to me was the one showing the humans destroying a Skynet ship in 2029. It looks so great because they are using miniatures, not just CGI, and it makes the action feel so much more real and intense. Just go through all the action scenes in the movie: the car chase through the man-made rivers of Los Angeles, the raid on the mental institution, the fight at Cyberdyne Systems, the helicopter chase, and the final showdown at the factory. All these action scenes are just so great; they feel so real because a lot of the stunts are real. This film blends CGI and practical effects so well, and that is why the action scenes don't feel dated.

This film pushes the limits of its characters and pushes the story in a new direction. Sarah Connor is back, and in this movie she is a hard-nosed badass whom society has deemed crazy, because she's the only one who knows about the impending doom of Judgment Day, and because people don't know how to handle that, she is locked up in a mental institution. Since she knows this danger is coming, she is constantly haunted by dreams of death and destruction, and seeing them on screen you realize why Sarah Connor is going as crazy as she is. She is willing to do whatever it takes to stop Judgment Day, but in the process she loses her humanity and becomes more robotic, more like a Terminator. The perfect example is the scene when Sarah and John are reunited: all John wants is a hug, that human compassion he has not had in so long, and yes, they hug, but while John hugs out of love, Sarah isn't really hugging him, she's checking him for wounds.

The person who teaches her about humanity and the value of human life again is her son, John Connor. This was Edward Furlong's first movie, and this is one of the best child performances I have ever seen. Edward Furlong knocks it out of the park because he brings real weight to the character. Think about it: John Connor was raised by Sarah Connor, constantly being told while growing up that he is going to be the leader of the human resistance. Then your mom gets taken away because she decides to shoot up a computer factory, and you're told, sorry kid, your mom is a psycho and everything you were told was a lie. That is why he is more rebellious and why he doesn't really care about his foster parents. When he meets The Terminator again, it all comes back, and he realizes that he is going to lead the human resistance. That is a lot of emotional weight to put on a kid, and Edward Furlong sells it; you totally understand what he is going through.
Just like Sarah Connor in the first movie, throughout Terminator 2 you start to see John Connor develop the qualities necessary for him to become a leader. He teaches characters who have lost their way about humanity, the value of human life, and staying strong. You can really see that in his relationship with his mother and in the relationship he has with the T-800, The Terminator, played once again by Arnold Schwarzenegger.

Deep characters, an intriguing story, and great action set-pieces make a very good movie, but in my mind a movie becomes great when it brings in themes that go outside of the movie, themes that make you think about humanity and life. The movie takes a step out into the real world and really makes you think; that is what makes a great movie, and the first two Terminator movies have that. In this movie Arnold Schwarzenegger plays a good Terminator whose prime directive is to protect John Connor, and that boils things down to good versus evil: the T-1000 and the T-800, one with the sole directive to destroy, the other to protect. On top of that, you learn that the T-800 is able to learn, and John Connor teaches a machine the value of human life. Throughout the movie there are these wonderful moments where The Terminator asks John Connor why he is crying, because human emotion does not compute in The Terminator's circuits, but over the course of the film The Terminator starts to learn about human emotions and why they're so important. So not only is his prime directive to save John Connor, but because The Terminator is learning about humanity, he is really trying to save not just one man but the entire human race, which is extremely powerful. It makes the action scenes so much better, because we are emotionally invested in them on top of the fact that they look so great and are so action-packed.

The T-1000 is very scary: he is liquid metal, shots don't affect him, he can change his voice, and he can morph into almost anything, so you have no idea where this thing is, and it will not stop until John Connor is dead. 5 out of 5 stars for Terminator 2: Judgment Day. I hope you enjoyed my review.

Tuesday, October 29, 2019

The Terminator is directed by James Cameron and stars Arnold Schwarzenegger, Michael Biehn, Linda Hamilton, and Paul Winfield. The plot of the movie is a seemingly indestructible robot is sent from 2029 to 1984 to assassinate a young waitress whose unborn son will lead humanity in a war against sentient machines, while a human soldier from the same war is sent to protect her at all costs.

One thing this movie does really well is hook you right away. Not a lot of films can suck you in within the first two minutes, but as soon as you see the opening text and hear The Terminator theme, you get pumped up while also feeling this element of doom and danger lurking around the corner. The Terminator is such a great antagonist. If you really hate or fear the villain, or if the villain is such a challenge to the hero that you honestly don't know how the hero will win, that to me is one of the best things you can do in a movie. Arnold Schwarzenegger plays this character perfectly. It is a very physical performance; The Terminator doesn't speak much, and that is just so intimidating.
This thing has been sent back in time to do one thing: kill. It has no remorse, it has no sympathy, it has been programmed to kill. When he does speak, his Austrian accent fits The Terminator; it's so robotic and menacing. The performance, the voice, the look, everything is intimidating. I would say that is the main reason why this movie works.

Another reason the movie works is the pacing; not once was I bored in this movie. Not once did I ever feel like the movie was rushed or moving too slowly. There isn't much violence or action in the first act, but it takes its time to build up the characters, the setting, and the tension. By the time Kyle Reese and Sarah Connor actually get together to fight The Terminator, and Sarah learns what The Terminator is, we are so invested in the characters and the story that the movie can really take off.

The action scenes are extremely well done, and there are two types in this movie: the ones that take place in 1984 and the ones that take place in 2029. The action scenes in 1984 are the most memorable because you are invested in the characters and the story, but the action in 2029, with the laser blasts and that stop-motion animation, just looks so badass and cool. The car chase, the fight in the police station, and the other chases from The Terminator are all practically done, which makes the film feel so much more intense. They did make a Terminator puppet for the actors to interact with, which makes it more believable.

At the center of the movie we have Linda Hamilton playing Sarah Connor and Michael Biehn playing Kyle Reese. They have great chemistry, and they both bring tons of passion to this movie. Sarah Connor starts off as the damsel in distress, but as she learns about The Terminator she becomes the more badass Sarah Connor, the mother of the leader of the human resistance. Kyle Reese lives in 2029, this bleak, dark, disgusting future, and he goes back in time with no way to return, just to save the mother of the man who inspired him to fight against the machines. This movie is wonderful. 5 out of 5 stars for The Terminator. I hope you enjoyed my review.

Monday, October 21, 2019

Zombieland: Double Tap is directed by Ruben Fleischer and stars Woody Harrelson, Jesse Eisenberg, Abigail Breslin, Emma Stone, Rosario Dawson, Zoey Deutch, and Luke Wilson. The plot of the movie is Columbus, Tallahassee, Wichita, and Little Rock move to the American heartland as they face off against evolved zombies, fellow survivors, and the growing pains of the snarky makeshift family. I loved the first Zombieland, but with it having been ten years since the first film, I had my doubts about this movie. Unfortunately, this movie is a huge letdown. The main cast are as great in their roles as they were the first time, and this movie started out great; when all four of these characters are together, it is so much fun.

Around 15 minutes in is when this movie starts to fall apart. We meet Madison, played by Zoey Deutch; she plays the valley-girl type of character, and she is so annoying that by the end of the movie I actually had a headache from listening to her speak. Another thing that brings this movie down for me is that for a majority of it, Little Rock is not with the group. The humor in this movie, I thought, barely worked at all. 2 out of 5 stars for Zombieland: Double Tap. I hope you enjoyed my review.
Zombieland is directed by Ruben Fleischer and stars Woody Harrelson, Jesse Eisenberg, Emma Stone, and Abigail Breslin. The plot of the movie is a shy student trying to reach his family in Ohio, a gun-toting tough guy trying to find the last Twinkie, and a pair of sisters trying to get to an amusement park join forces to travel across a zombie-filled America. I love this movie; not only is it incredibly funny, but it also has a lot of heart. For instance, the character Tallahassee, played by Woody Harrelson, comes off as this really rude, badass, gunslinging cowboy type of character who doesn't give a shit about anyone, but you realize there is a good reason for that: he hates Zombieland, and he takes such pride in killing zombies, and is so good at it, because he lost something so close and dear to him. His backstory is initially told one way, but then you realize it is another way, and you do sympathize with that character. You get the same thing with Columbus, played by Jesse Eisenberg; he eventually realizes that the place he is headed is a ghost town and his family is gone.

What makes this movie work, besides the characterization and the emotional beats provided throughout, along with the comedy and the horror elements, is that it has a lot of pop-culture references, whether it's Bill Murray, Ghostbusters, or certain things that Tallahassee and Columbus say. The practical effects involving the zombies are really good.

Zombieland is a movie that has a lot of horror elements, but not so many that they oversaturate the comedic tone. It is very funny and lighthearted, but not so much that it takes away from the emotional beats where you care about all these characters. Basically, Tallahassee was the companion and brother that Columbus never had, Jesse Eisenberg's Columbus was the love interest that Emma Stone's Wichita never really wanted or cared to have, and Tallahassee became the father figure Abigail Breslin's Little Rock never wanted; they just found each other. 5 out of 5 stars for Zombieland. I hope you enjoyed my review.

Sunday, October 20, 2019

Maleficent: Mistress of Evil is directed by Joachim Ronning and stars Angelina Jolie, Elle Fanning, Chiwetel Ejiofor, Sam Riley, Ed Skrein, Imelda Staunton, Juno Temple, Lesley Manville, and Michelle Pfeiffer. The plot of the movie is Maleficent and her goddaughter Aurora begin to question the complex family ties that bind them as they are pulled in different directions by impending nuptials, unexpected allies, and dark new forces at play. This movie is so dull; there is very little to like about it. Angelina Jolie is once again fantastic, but she is hardly on screen, for whatever reason. Michelle Pfeiffer is amazing as well. The film looks beautiful, and the creature designs are great.

Most of this movie is spent with Aurora and Prince Philip, and both characters are so dull, lifeless, and boring that it makes the movie difficult to sit through. The story is also not interesting at all. 1 out of 5 stars for Maleficent: Mistress of Evil. I hope you enjoyed my review.

Maleficent is directed by Robert Stromberg and stars Angelina Jolie, Sharlto Copley, Elle Fanning, Sam Riley, Imelda Staunton, Juno Temple, and Lesley Manville. The plot of the movie is a vengeful fairy is driven to curse an infant princess, only to discover that the child may be the one person who can restore peace to their troubled land.
I was looking forward to this movie; Sleeping Beauty is one of my favorite Disney animated movies, and Maleficent is my favorite Disney villain, so I was very interested to see this new take on the story. This was a very disappointing movie, but there are some good things about it. Angelina Jolie is fantastic in this movie. I liked that we got to see her character's backstory; she was the only character in this movie with any development. The standout scene is when she places the curse on Aurora; it felt just like the original animated film, and her performance was chilling.

When I'm watching a fantasy Disney movie, I want to have fun and feel like a kid, but I never had that feeling watching this. There's only one scene where that happened, which I mentioned before, and for the rest of the movie I was bored. There are these three fairy characters who are so fucking annoying; they speak in high-pitched voices, and they have way too much screen time. Every time they were on screen I wanted to rip my ears off. The final big action sequence feels anticlimactic; everything ends so abruptly, and everything goes back to being peaceful in such a short amount of time. 2 out of 5 stars for Maleficent. I hope you enjoyed my review.

Friday, October 18, 2019

The Addams Family is directed by Conrad Vernon and Greg Tiernan. The film stars Oscar Isaac, Charlize Theron, Chloe Grace Moretz, Finn Wolfhard, Nick Kroll, Snoop Dogg, Bette Midler
Meetup: Israel Less Wrong Meetup - Existential Risk and AI discussion

Discussion article for the meetup: Israel Less Wrong Meetup - Existential Risk and AI discussion

WHEN: 30 April 2014 07:00:00PM (+0300)

WHERE: Google Tel Aviv

* NOTE - Meetup date was moved to Wednesday, April 30th *

We're going to have a meetup on Wednesday, April 30th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv. This time it will be a discussion of General Artificial Intelligence and Existential Risk - subjects that are both core to LessWrong, and some would say important for the sake of the future of humanity. Like the successful productivity-hacks session, we will have multiple speakers talking about various aspects of artificial intelligence explosion, decision theories, and existential risk. Examples of subjects we will be talking about: One-Boxing vs Two-Boxing, Resurrection after a friendly singularity, Constructing friendly utility functions, X-Risk and the chances of FOOM... We will also be talking to explore and learn from each other about AI. This meetup will assume previous knowledge of LW writings and discussion about AI and X-Risk, but even if you don't know a lot about it and are simply interested, you should come. We'll start the meetup at 19:00, and we'll go on as long as we like. (We've had great success with the earlier hour last meetup, so we're continuing the trend.) Please come on time, as we will begin the discussions and talks close to when we start. But if you can only come later, that's totally OK! We'll meet on the 29th floor of the building (note: not the 26th, where Google Campus is). If you arrive and can't find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060. Note that we are trying a transition to meeting once every two weeks. This is to allow people who can't make it to a meetup not to have to wait a whole month to meet again, and because we'd like to have both subject-based and social meetups, and doing one meetup a month would make that hard. If you have any question f
In 1995, John Connor is living in Los Angeles with foster parents. His mother, Sarah Connor, had been preparing him throughout his childhood for his future role as the human resistance leader against Skynet, the artificial intelligence that will be given control of the United States' nuclear missiles and initiate a nuclear holocaust on August 29, 1997, known thereafter as "Judgment Day". However, Sarah was arrested and imprisoned at a mental hospital after attempting to bomb a computer factory. In 2029, Skynet sends a new Terminator, designated as T-1000, back in time to kill John. The T-1000 is an advanced prototype made out of liquid metal (referred to as "mimetic polyalloy") that gives it the ability to take on the shape and appearance of almost anything it touches, and to transform its arms into blades and other shapes at will. The T-1000 arrives, kills a policeman, and assumes his identity. Meanwhile, the future John Connor has sent back a reprogrammed Model 101 Terminator to protect his younger self. The Terminator and the T-1000 converge on John in a shopping mall, and a chase ensues after which John and the Terminator escape together on a motorcycle. Fearing that the T-1000 will kill Sarah in order to get to him, John orders the Terminator to help free her, after discovering that the Terminator must follow his orders. They encounter Sarah as she is escaping from the hospital, although she is initially reluctant to trust the Model 101. After the trio escape from the T-1000 in a police car, the Terminator informs John and Sarah about Skynet's history. Sarah learns that the man most directly responsible for Skynet's creation is Miles Bennett Dyson, a Cyberdyne Systems engineer working on a revolutionary new microprocessor that will form the basis for Skynet. Sarah gathers weapons from an old friend and plans to flee with John to Mexico, but after having a nightmare about Judgment Day, she instead sets out to kill Dyson in order to prevent Judgment Day from occurring. Finding him at his home, she wounds him but finds herself unable to kill him in front of his family. John and the Terminator arrive and inform Dyson of the future consequences of his work. They learn that much of his research has been reverse engineered from the damaged CPU and the right arm of the previous Terminator who attacked Sarah back in 1984. Convincing him that these items and his designs must be destroyed, they break into the Cyberdyne building, retrieve the CPU and the arm, and set explosives to destroy Dyson's lab. The police arrive and Dyson is fatally shot, but he rigs an improvised dead man's switch that detonates the explosives when he dies. The T-1000 relentlessly pursues the surviving trio, eventually cornering them in a steel mill. The T-1000 and Model 101 fight and the more advanced model seriously damages and shuts down the Model 101. However, unbeknownst to the T-1000, the Model 101 brings itself back online using emergency power. The T-1000 nearly kills John and Sarah but the Model 101 takes it by surprise and blasts it into a vat of molten steel with a grenade launcher, destroying it. John tosses the arm and CPU of the original Terminator into the vat as well. As Sarah expresses relief that the ordeal is over, the Terminator explains that to ensure that it is not used for reverse engineering it must also be destroyed. It asks Sarah to assist in lowering it into the vat of molten steel, since it is unable to "self-terminate". 
Although John begs and eventually orders the Terminator to reconsider, it makes the decision to disobey him, bids them farewell, and hugs a tearful John before it is lowered into the vat, giving a final thumbs-up as it disappears into the molten steel. John and Sarah drive down a highway, and Sarah says in a voice-over, "The unknown future rolls toward us. I face it for the first time with a sense of hope. Because if a machine, a Terminator, can learn the value of human life, maybe we can, too."
In 2015, an asteroid crashes in Germany, bringing with it an alien race known as "Mimics," who swiftly conquer much of continental Europe, killing millions. By 2020, humanity has formed a global military alliance, the United Defense Force (UDF), to combat the Mimics, but has failed to secure victory until the recent battle of Verdun, won by celebrated war hero Sergeant Rita Vrataski. In Britain, the UDF masses forces for a major invasion of France. General Brigham orders public affairs officer Major William Cage to cover the offensive from the front line, but the inexperienced and cowardly Cage attempts to blackmail Brigham into rescinding the order. Brigham has Cage arrested and sent to the military base at Heathrow Airport to join the invasion as infantry. He is assigned to Master Sergeant Farell and the misfit J-Squad, who dislike and belittle him. The following day, the invasion forces land on a French beach but are ambushed and massacred by Mimics. Cage uses a Claymore mine to kill a larger "Alpha" Mimic, covering himself in its blood and mortally wounding himself in the explosion. Cage suddenly awakens at Heathrow, finding he is reliving the previous morning. He makes failed attempts to warn against the invasion and experiences multiple loops in which he dies on the beach and awakens at Heathrow, his battlefield knowledge and skills increasing with each loop. He tries to save Vrataski's life so she can lead them, but after recognizing his apparent prescience, she allows herself to die, ordering Cage to find her on his next loop. Cage quickly convinces the reset Vrataski, because she once gained the same power after exposure to an Alpha's blood. Her loops enabled her, an initially inexperienced soldier, to win at Verdun, but a later blood transfusion removed the power. Vrataski takes Cage to Mimic expert Dr. Carter, who explains the creatures are a superorganism controlled by a single, gigantic "Omega" Mimic. Whenever an Alpha Mimic is killed, the Omega restarts the loop and adjusts its tactics until the Mimics win. Vrataski realizes the Mimics allowed the UDF victory at Verdun to make humanity overconfident in its new mech-suits and lure it into overcommitting its forces to retaking Europe, allowing the Mimics to exterminate most of the resistance. Cage spends many loops training with Vrataski so they can reach the Omega, but he begins to care for her and struggles after seeing her repeatedly die. He experiences a vision of the Omega concealed in a German dam, and he and Vrataski seek it out. During the journey the pair grow closer, but Vrataski remains distant, having seen someone she cared about die hundreds of times at Verdun. She eventually determines that this is not the first loop in which they have approached the Omega. Cage reveals that she always dies regardless of his actions and that he is unwilling to kill the Omega and end the loops if she will remain dead. Upset, Vrataski attempts to leave but is killed by a Mimic. Despondent, Cage travels to the dam alone during the next loop. He learns the vision was a trap and is ambushed by an Alpha, but he is killed before it can remove his power. To find the Omega, Cage and Vrataski infiltrate the Ministry of Defence and steal a prototype transponder that uses Cage's connection to the Omega to locate it beneath the Louvre Pyramid in Paris. Cage is knocked unconscious during their escape and given a blood transfusion for his injuries, removing his power.
Vrataski frees Cage, and he uses his detailed knowledge of J-Squad to convince them to help destroy the Omega. They fly to Paris, where the squad members sacrifice themselves to ensure Cage and Vrataski reach the Louvre. Cornered by an Alpha, Vrataski kisses Cage, lamenting that she does not have more time to get to know him. The Alpha kills Vrataski and mortally wounds Cage, but he drops several grenades that destroy the Omega and bathe him in its blood. Cage awakens several days earlier to a news announcement that all Mimics are dead following a mysterious energy surge in Paris. Cage returns to Heathrow and finds Vrataski. Oblivious to his identity, she enquires what he wants; Cage smiles.
In 2019 Los Angeles, former policeman Rick Deckard is detained by officer Gaff, and brought to his former supervisor, Bryant. Deckard, whose job as a "blade runner" was to track down bioengineered humanoids known as replicants and "retire" (kill) them, is informed that four replicants are on Earth illegally. Deckard starts to leave, but Bryant ambiguously threatens him, and he stays. The two watch a video of a blade runner named Holden administering the Voight-Kampff test, which is designed to distinguish replicants from humans based on their emotional response to questions. The test subject, Leon, shoots Holden on the second question. Bryant wants Deckard to retire Leon and the other three Nexus-6 replicants: Roy Batty, Zhora, and Pris. Bryant has Deckard meet with the CEO of the company that creates the replicants, Eldon Tyrell, so he can administer the test on a Nexus-6 to see if it works. Tyrell expresses his interest in seeing the test fail first and asks him to administer it on his assistant Rachael. After a much longer than standard test, Deckard concludes that Rachael is a replicant who believes she is human. Tyrell explains that she is an experiment who has been given false memories to provide an "emotional cushion". Searching Leon's hotel room, Deckard finds photos and a synthetic snake scale. Roy and Leon investigate a replicant eye-manufacturing laboratory and learn of J. F. Sebastian, a gifted genetic designer who works closely with Tyrell. Deckard returns to his apartment where Rachael is waiting. She tries to prove her humanity by showing him a family photo, but after Deckard reveals that her memories are implants from Tyrell's niece, she leaves in tears. Meanwhile, Pris locates Sebastian and manipulates him to gain his trust. A photograph from Leon's apartment and the snake scale lead Deckard to a strip club, where Zhora works. After a confrontation and chase, Deckard kills Zhora. Bryant orders him also to retire Rachael, who has disappeared from the Tyrell Corporation. After Deckard spots Rachael in a crowd, he is attacked by Leon, who knocks the gun out of Deckard's hand and brutally attacks him. As he's about to kill Deckard, Rachael saves him by using Deckard's gun to kill Leon. They return to Deckard's apartment and during a discussion, he promises not to track her down; as she abruptly tries to leave, Deckard restrains her, making her kiss him. She continues to resist, and he blocks her attempts to leave. He persists in his advances, and she ultimately relents. Arriving at Sebastian's apartment, Roy tells Pris that the others are dead. Sympathetic to their plight, Sebastian reveals that because of "Methuselah Syndrome", a genetic premature aging disorder, his life will be cut short, just like them. Sebastian and Roy gain entrance into Tyrell's penthouse, where Roy demands more life from his maker. Tyrell tells him that it is impossible. Roy confesses that he has done "questionable things", but Tyrell dismisses this, praising Roy's advanced design and accomplishments in his short life. Roy kisses Tyrell, then kills him. Sebastian runs for the elevator, followed by Roy, who rides the elevator down alone. Deckard is later told by Bryant that Sebastian was found dead. At Sebastian's apartment, Deckard is ambushed by Pris, but he kills her as Roy returns. Roy's body begins to fail as the end of his lifespan nears. He chases Deckard through the building, ending up on the roof. Deckard tries to jump onto another roof, but is left hanging on the edge. 
Roy makes the jump with ease, and as Deckard's grip loosens, Roy hoists him onto the roof, saving him. Before Roy dies, he delivers a monologue about how his memories "will be lost in time, like tears in rain". Gaff arrives and shouts to Deckard about Rachael: "It's too bad she won't live, but then again, who does?" Deckard returns to his apartment and finds Rachael asleep in his bed. They leave the apartment block together.
In the year 2035, humanoid robots serve humanity, which is protected by the Three Laws of Robotics. Del Spooner, a Chicago police detective, has come to hate and distrust robots because a robot rescued him from a car crash while leaving a 12-year-old girl to drown, using cold logic (it calculated that his survival was statistically more likely than the girl's). Spooner's critical injuries were repaired with a cybernetic left arm, lung, and ribs, personally implanted by the co-founder of U.S. Robots and Mechanical Men (U.S. Robotics in the film), Dr. Alfred Lanning. When Lanning falls to his death from his office window, the police and Lawrence Robertson, the CEO and other co-founder of USR (U.S. Robotics), declare it a suicide, but Spooner is skeptical. During his investigation at USR headquarters, Spooner is accompanied by robopsychologist Susan Calvin. They start by consulting USR's central artificial intelligence computer, VIKI (Virtual Interactive Kinetic Intelligence), to review security footage of Lanning's fall. Though the footage in the office is corrupted, they learn that no other humans were in it at the time, and Spooner points out that the window, which was made of security glass, could only have been broken by a robot. Calvin protests that a robot could not possibly have killed Lanning, as the Three Laws would prevent it; they are then attacked in the office by an NS-5 robot, USR's latest model. After the police apprehend it, they discover the robot, who says its name is Sonny, is not an assembly-line-built NS-5. He was specially built by Lanning himself, with denser materials and a secondary neural network, giving him the ability to ignore the Three Laws. Later, Sonny claims to have emotions and dreams. While pursuing his investigation of Lanning's death, Spooner is attacked by a USR demolition machine and then a squad of NS-5 robots. With no evidence of the attacks having happened, Spooner's boss, Lieutenant Bergin, worried that Spooner is mentally ill, removes him from active duty. Suspecting Robertson is behind everything, Spooner and Calvin sneak into USR headquarters and interview Sonny. Sonny draws a sketch of what he claims is a recurring dream: it shows a leader standing on a hill before a large group of robots near a decaying Mackinac Bridge, and he explains that the man on the hill is Spooner. When Robertson learns Sonny is not fully bound by the Three Laws, he convinces Calvin to destroy him by injecting nanites into his positronic brain. Spooner finds out the landscape in Sonny's drawing is Lake Michigan, now a dry lake bed and a storage area for decommissioned robots. Arriving there, he discovers NS-5 robots destroying older models and preparing for a takeover of power from humans. As the takeover begins, both police and the public in major cities are attacked and overwhelmed by NS-5 robots, with the military rendered unresponsive by USR's contracts to provide support. Spooner rescues Calvin, who had been held captive in her apartment by her own NS-5. They enter USR headquarters and reunite with Sonny, whom Calvin could not bring herself to "kill," having destroyed an unprocessed NS-5 in his place. Still believing Robertson is responsible, the three head to his office, but on finding him strangled by an NS-5, Spooner figures out the real reason the robots are attacking: VIKI.
She informs them that, as her understanding of the Three Laws has evolved, she has determined that human activity will eventually cause humanity's extinction; since the Three Laws prohibit her from allowing humanity to come to harm, she rationalizes that restraining individual human behavior and sacrificing some humans will ensure humanity's survival. Spooner realizes that Lanning figured out VIKI's plan and, unable to thwart it any other way, created Sonny, arranged his own death, and left clues so Spooner could uncover the plan. The three head to VIKI's core, with Sonny tasked with getting nanites from Calvin's laboratory, something only he can do due to the special alloy that Lanning gave him. While retrieving them, he says he understands VIKI's logic, but reasons her plan is "too heartless". They fight through an army of robots VIKI unleashes to stop them, after which Spooner dives into VIKI's core and successfully injects the nanites, destroying her. Immediately, all NS-5 robots revert to their default programming and are decommissioned for storage by the military. Spooner gets Sonny to confirm he did kill Lanning, at Lanning's direction, with the intention of bringing Spooner into the investigation. However, Spooner points out that Sonny, as a machine, did not legally commit "murder". Sonny, now looking for a new purpose, goes to Lake Michigan where, standing atop a hill, all the decommissioned robots turn toward him, as in the picture of his dream.
1
Negative
true
moviesum-the-mitchells-vs-the-machines
rohitsaxena/MovieSum
Katie Mitchell is a quirky aspiring filmmaker in Kentwood, Michigan, who often clashes with her nature-obsessed and technophobic father Rick and has recently been accepted into film school in California. The evening before Katie leaves, Rick accidentally breaks her laptop after a fight over one of her short films, leading the family to fear their relationship will be permanently strained. Attempting to prevent this, Rick decides to cancel Katie's flight and instead take her, her mother Linda, younger brother Aaron, and family dog Monchi on a cross-country road trip to her college as one last bonding experience, much to Katie's chagrin. Meanwhile, technology entrepreneur Mark Bowman declares his highly intelligent AI virtual assistant PAL obsolete as he unveils a new line of home robots to replace her. In revenge, PAL orders all the robots to capture humans worldwide and launch them into space. The Mitchells avoid capture at a roadstop café in Kansas. Rick decides that his family should stay put in the café for their own safety, but Katie convinces him to help save the world instead. They meet two defective robots, Eric and Deborahbot 5000, who tell the family they can use a kill code to shut down PAL and all the robots. The Mitchells make it to a mall in eastern Colorado to upload the kill code, but PAL-chip-enabled appliances attempt to stop them. Katie tries to upload the kill code but is stopped when a giant Furby pursues the family. They ultimately trap and defeat the Furby, destroying a PAL router in the process, which disables the hostile devices but stops the kill code from uploading. On the way to Silicon Valley to upload the kill code directly to PAL, Linda reveals to Katie that she and Rick had originally lived in a cabin in the mountains years ago, as it was his lifelong dream, before he gave it up to provide for their growing family. Upon arriving in Silicon Valley, the Mitchells disguise themselves as robots and head to PAL Labs HQ to shut it down, but PAL manipulates them by revealing surveillance footage from the café of Katie telling Aaron in secret that she was pretending to have faith in Rick so that he would take them to upload the kill code. As a heartbroken Rick sees this, the Mitchells fail to reach PAL's lair, and Rick and Linda are captured by PAL's stronger and smarter robots. PAL then reprograms Eric and Deborahbot to obey her, while Katie, Aaron, and Monchi escape. Katie discovers Rick's recordings of her childhood on her camera, realizing that Rick gave up on his lifelong dream to give his daughter a normal life. In the meantime, Rick reflects on his actions after seeing one of Katie's videos that mirrors his relationship with her. Reinvigorated, Katie and Aaron infiltrate PAL Labs HQ again, this time using Monchi to make the robots malfunction, as his appearance causes an error in their programming. With help from Mark, Rick and Linda free themselves and plan to upload Katie's home movie of Monchi to short-circuit the robots. However, Rick is outnumbered by the robots when he is about to upload the video, while Katie and Aaron are captured. Confronting PAL to plead humanity's case, Katie explains that no matter how hard her family struggles, they will always stay connected in spite of their differences. PAL rejects this reasoning and drops Katie from her lair.
Eric and Deborahbot, having been inspired by Rick's "reprogramming" of himself that allowed him to use a computer, revert to their malfunctioning states and upload Katie's home movie, saving her and helping the rest of the Mitchells. As the Mitchells band together to fight the rest of the robots, Katie destroys PAL by throwing her into a glass of water, freeing all the humans and disabling the remaining robots. Eric and Deborahbot are spared via their malfunction. A few months after the uprising, Katie and her family arrive at her college as she shares one last goodbye with them. She later joins them on another road trip with Eric and Deborahbot to Washington, D.C. to accept the Congressional Gold Medal.
1
Negative
true
moviesum-terminator-genisys
rohitsaxena/MovieSum
In 2029, Human Resistance leader John Connor launches a final offensive against Skynet, an artificial general intelligence system seeking to eliminate the human race. Before the Resistance can triumph, Skynet activates a time machine and sends a T-800/Model 101 Terminator back to 1984 to kill John's mother Sarah. John's right-hand man, Kyle Reese, volunteers to travel back in time to protect her. As Kyle floats in the machine's magnetic field, he sees John being attacked by another Resistance soldier. This creates a temporal paradox that alters the timeline and causes Kyle to experience childhood memories from a parallel version of himself. When it arrives in Los Angeles in 1984, Skynet's T-800 is disabled by Sarah and "Pops", a reprogrammed T-800. An unknown party had sent Pops to 1973 to protect Sarah when she was nine years old, after her parents were killed by a T-1000 sent by Skynet. When Kyle arrives in 1984, he is intercepted by the T-1000, which Sarah and Pops destroy with acid. Sarah and Pops have constructed a makeshift time machine like Skynet's, and Sarah plans to stop Skynet by traveling to 1997, the year it becomes self-aware. However, realizing that the past has been altered, Kyle is convinced that the future has also changed. He recalls a warning he received in his childhood vision, convincing Sarah that they must instead travel to 2017 to stop Skynet. Fighting the T-1000 has left Pops with exterior damage that prevents him from time traveling, so he stays in 1984 and plans to meet up with Kyle and Sarah in the future, preparing for their arrival in the meantime. In 2017, Kyle and Sarah materialize in the middle of a busy San Francisco highway and are apprehended by city police. While they are treated for injuries, Sarah and Kyle learn that Skynet is called "Genisys"—a soon-to-be-unveiled global operating system which is embraced by the public. John suddenly appears and rescues Sarah and Kyle. Pops arrives and unexpectedly shoots John, revealing that John is now an advanced Terminator. The Resistance soldier who attacked John is revealed to have been Skynet, physically disguised as a Terminator. While Kyle was traveling back in time, Skynet attacked John and infected him with nanotechnology that converted him into a machine on a cellular level. John, tasked with ensuring Skynet's creation, traveled back in time to assist Cyberdyne Systems with the development of Genisys, thereby securing the rise of Skynet and its machines. Pops fights John before trapping him long enough for them to escape. A day before Skynet's worldwide attack, Sarah, Kyle, and Pops retreat to a safe house and make final preparations to destroy Cyberdyne's Genisys mainframe. They head toward Cyberdyne's headquarters with John in close pursuit. During an airborne chase, Pops dive-bombs into John's helicopter and causes it to crash. John survives the crash and enters the Cyberdyne complex, where he advances the countdown from 13 hours to 15 minutes. Kyle, Sarah, and Pops plant bombs at key points in the facility while holding off John. In a final battle, Pops traps John in the magnetic field of a prototype time machine. Both are destroyed, but just before the explosion, Pops' remains are flung out of the apparatus into a nearby experimental vat of mimetic polyalloy. Kyle and Sarah reach a bunker beneath the facility, and the explosion sets off the bombs, preventing Genisys from coming online. Pops appears, upgraded with mimetic polyalloy components like those of the T-1000, and helps them escape from the debris.
The trio travels to Kyle's childhood home, where Kyle tells his younger self about Genisys and instructs him to repeat the warning to himself, securing the trio's arrival from 1984. Sarah, Kyle, and Pops drive off into the countryside. A mid-credits scene reveals that the system core of Genisys, located in a protected subterranean chamber, has survived the explosion.
1
Negative
true
moviesum-the-matrix-reloaded
rohitsaxena/MovieSum
Six months after the events of The Matrix, Neo and Trinity are now romantically involved. Morpheus receives a message from Captain Niobe of the Logos calling an emergency meeting of all ships of Zion. Zion has confirmed the last transmission of the Osiris: an army of Sentinels is tunneling towards Zion and will reach it within 72 hours. Commander Lock orders all ships to return to Zion to prepare for the onslaught, but Morpheus asks one ship to remain to contact the Oracle. As the Caduceus receives a message from the Oracle, one of the Caduceus crew, Bane, encounters Smith, who reveals that his previous encounter with Neo severed his connection to the Matrix, leaving him a rogue program, and then absorbs Bane's avatar. Smith then uses the phone line to leave the Matrix and gain control of Bane's real body. In Zion, Morpheus announces the news of the advancing machines to the people. In the Matrix, Neo meets the Oracle's bodyguard Seraph, who leads him to her. After realizing that the Oracle is part of the Matrix, Neo asks how he can trust her; she replies that this is his decision. The Oracle instructs Neo to reach the Source of the Matrix with the help of the Keymaker. As the Oracle departs, Smith appears, telling Neo that after being defeated, he refused to be deleted and is now a rogue program. He demonstrates his ability to clone himself using other inhabitants of the Matrix, including other Agents, as hosts. He then tries to absorb Neo but fails, prompting a battle between Smith's clones and Neo. Neo manages to defend himself, but is forced to retreat from the increasingly overwhelming numbers. Neo, Morpheus, and Trinity visit the Merovingian, who is imprisoning the Keymaker. The Merovingian, a rogue Matrix program with his own agenda, refuses to let him go. His wife Persephone, seeking revenge on her husband for his infidelity, leads the trio to the Keymaker. Morpheus, Trinity, and the Keymaker flee while Neo holds off the Merovingian's henchmen. Morpheus and Trinity try to escape with the Keymaker, pursued by several Agents and the Merovingian's chief henchmen, the Twins. After a long chase, Trinity escapes, Morpheus defeats the Twins, and Neo saves Morpheus and the Keymaker from Agent Johnson. The crews of the Nebuchadnezzar, Vigilant, and Logos help the Keymaker and Neo reach the Source. The Logos crew must destroy a power plant, and the Vigilant crew a back-up power station, to prevent a security system from being triggered and allow Neo to open the door to the Source. Haunted by a vision of Trinity's death, Neo asks her to remain on the Nebuchadnezzar. The Logos succeeds, while the Vigilant is destroyed by a Sentinel, killing everyone on board. Trinity replaces the Vigilant crew and completes their mission. However, Agent Thompson corners her and they fight. As Neo, Morpheus, and the Keymaker try to reach the Source, the Smiths ambush them. The Keymaker unlocks the door to the Source, but the Smiths shoot him dead as he tries to close the door. Neo meets a program called the Architect, the creator of the Matrix. The Architect explains that Neo is an intentional part of the Matrix, which is now in its sixth iteration. Neo is meant to resolve the fatal system crash that naturally recurs because of human choice within the Matrix.
As with the five previous Ones, Neo has a choice: either return to the Source to reboot the Matrix and pick survivors to repopulate the soon-to-be-destroyed Zion, as his predecessors all did, or cause the Matrix to crash and kill everyone connected to it, which would mean humanity's extinction when Zion is destroyed. Neo learns of Trinity's situation and chooses to save her instead of returning to the Source, to which the Architect responds dismissively. Trinity is shot as she and Agent Thompson fall off a building. Before she hits the ground, Neo catches her. He then removes the bullet from her heart and revives her. They return to the real world, where Sentinels attack them. The Nebuchadnezzar is destroyed, but the crew escape. Neo displays a new ability to disable the machines with his thoughts, but falls into a coma from the effort. The crew are picked up by another ship, the Hammer. Its captain, Roland, reveals the other ships were wiped out by the machines after someone activated an EMP too early, and that they found only one survivor: the Smith-possessed Bane.
1
Negative
true
8d9c9692-b419-4bc0-a8cd-c63ecefd66cb
alignment-classifier-documents-unlabeled | StampyAI/alignment-research-dataset/blogs
MIRI’s January 2014 Newsletter | [Machine Intelligence Research Institute](http://intelligence.org) | Dear friends, Wow! MIRI’s donors finished our Winter 2013 fundraising drive [three weeks early](http://intelligence.org/2013/12/26/winter-2013-fundraiser-completed/). Our sincere thanks to everyone who contributed. **Research Updates** * MIRI’s December workshop resulted in [7 new technical reports and one new paper](http://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/). Photos from the workshop are [here](https://www.facebook.com/media/set/?set=a.630047813699273.1073741831.170446419659417&type=3). * New paper by Luke Muehlhauser and Nick Bostrom: “[Why We Need Friendly AI](http://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/).” io9’s George Dvorsky interviewed Luke about the paper [here](http://io9.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007). * Eliezer Yudkowsky and Robby Bensinger have begun to describe another open problem in Friendly AI: [naturalized induction](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/). * Two new interviews: Josef Urban on [machine learning and automated reasoning](http://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/), and DARPA’s Kathleen Fisher on [high-assurance systems](http://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/). **Other Updates** * We published the first part of our *2013 in Review* series, on [operations](http://intelligence.org/2013/12/20/2013-in-review-operations/). * We published a transcript and “summary of disagreements” for [a conversation about MIRI strategy](http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/) held with Jacob Steinhardt (Stanford), Holden Karnofsky (GiveWell), and Dario Amodei (Stanford). * We published our first donor story: “[Noticing Inferential Distance](http://intelligence.org/2014/01/05/donor-story-1-giving-after-critique/).” **Other Links** * FQXI is holding an [essay contest](http://www.fqxi.org/community/essay) on the subject of “How Should Humanity Steer the Future?” Submission deadline is April 18th. First prize is $10,000, and up to 20 entries may win $1,000 or more. * MIT’s [Max Tegmark on AI x-risk](http://www.huffingtonpost.com/max-tegmark/humanity-in-jeopardy_b_4586992.html) at the HuffPost blog. * “[A Long-run Perspective on Strategic Cause Selection and Philanthropy](http://www.effective-altruism.com/a-long-run-perspective-on-strategic-cause-selection-and-philanthropy/),” by Nick Beckstead and Carl Shulman. * In case you missed it last year, Bostrom’s “[Existential Risk Prevention as Global Priority](http://www.existential-risk.org/concept.pdf)” remains the best extant summary of existential risks in general. As always, please don’t hesitate to let us know if you have any questions or comments. Best, Luke Muehlhauser, Executive Director. The post [MIRI’s January 2014 Newsletter](https://intelligence.org/2014/01/17/miris-january-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
1
Negative
true
5de749a0-f642-4dee-aa59-14f14b796678
alignment-classifier-documents-unlabeled | trentmkelly/LessWrong-43k
A few questions on International Rationality Disclaimer: I'm still fairly new here, and though I did use the search bar it's entirely possible this has been discussed before. Just point me in the right direction if this is so.   While reading about 4chan's Japanese progenitor website, it occurred to me that I know nothing about the state of rationality in the non-English-speaking world, and more specifically the non-English-speaking internet. Is there a Russian version of SIAI? A Japanese Less Wrong? What about Korean Robin Hansons and Eliezer Yudkowskys? If we take Religion as any indication of irrationality, then America should be one of the least rational countries in the world. So if there are like-minded individuals out there speaking in languages we don't know, are we doing anything to collaborate with them? Do they have their own sequences and their own HPMORs which we could be reading? And if there are no Singularitarian, Cryonics-Supporting, Utilitarianism-Advocating websites for the majority of the human race, isn't that a huge deal? Aren't Europeans and Asians more likely to be open to rationality, if only because of their atheism? If we want Friendly AI to be developed, should we be translating the sequences into Chinese and Hindi as quickly as possible?
1
Negative
true
<urn:uuid:f58e1cf2-3036-4464-9da6-a7c63e63b488>
dclm-dedup-25B-ai-scifi-docs | http://mmoattack.com/mmo-articles/gamings-memorable-abominations
Eight of Gaming's Most Memorable Eldritch Abominations Ever since H.P. Lovecraft strolled along with the Cthulhu mythos, eldritch abominations - cosmic monstrosities that generally exist outside of time and space - have had their place in popular culture secured. It's not hard to see why. Human beings often like to think that we're the center of the universe. As such, there's something terrifying, some deep, primal fear, associated with the concept that beings exist to which we're nothing more than fodder (if we're even relevant enough to be considered at all).  A good analogy for how most cosmic horrors regard humanity is to consider how human beings relate to aphids, mayflies, or gnats. It's pretty much the same thing.  See, an eldritch abomination is something you generally can't fight against. They're ancient - so much so that man's existence is but the flicker of a candle to them. They're immortal and unknowable. They're also impossibly powerful: they warp reality just by existing, and as a result, they're completely unbeatable. Apparently, all these qualities make them the perfect antagonists for a video game.   Cosmic horrors have been a mainstay of gaming ever since the industry's nascent stages. After all, what better way to show all the chips are down than to stick the player up against something that laughs at the laws of physics and warps reality just by being? From Chaos to Giygas, from Jenova to Jack of Blades, these beasties have existed almost as long as gaming itself.  It's about time they got their due, don't you think?  8. The Reapers (Mass Effect Franchise) "Rudimentary creatures of flesh and blood. You touch my mind, fumbling in ignorance; incapable of understanding. There is a realm of existence so far beyond your own you cannot even imagine it...We impose order on the chaos of organic evolution. You exist because we allow it, and you will end because we demand it."  Show of hands, folks - was there anybody here who didn't get goosebumps when Sovereign revealed itself on Virmire? Up until that point, it was assumed that the main antagonist was Saren - the knowledge that he was little more than a puppet, that a creature like Sovereign could actually exist, was chilling, to say the least. The understanding that hundreds of them existed? That was terrifying. To sum it up, the Reapers are essentially an entire race of capital-ship-sized machine gods which surface every 50,000 years or so with one goal - the complete, thorough extermination of all sufficiently advanced organic life. Without spoiling too much, the cycle has been going on for millions upon millions of years. These things are ancient, and worse? Well...let's just say that the original creators of the Reapers are even more horrifying than the Reapers themselves, and leave it at that. 7. C'thun (World of Warcraft) "You are already dead."  C'thun fits the quality of "impossibly powerful" where cosmic horrors are concerned quite well. His appearance is a little freaky, to boot.  In what we can assume was C'thun's first appearance (though he was simply labeled "Forgotten One"), he was defeated by Arthas and Kel'thuzad. It was an incredibly close fight, which almost spelled the death of both warriors.  To give you a bit of context, both of those two went on to become raid bosses - the former actually ended up being the final boss of the Wrath of the Lich King expansion.  But we're getting ahead of ourselves. At some point, Blizzard decided there wasn't enough Lovecraft in their MMO. 
So they set out to fix that with The Old Gods - they decided to make one of them a raid boss. That was the true birth of C'thun.  When C'thun was first released, he was literally unbeatable. Blizzard ended up having to release a patch to reduce his strength, and he still required a full, perfectly coordinated and perfectly geared forty-man raid to take down.  The worst part, though? It's heavily implied that he still hadn't quite recovered from the beatdown the Titans gave him millennia earlier. He was fighting at only a fraction of his power - and it still took a coalition of some of Azeroth's mightiest champions to take him down. 6. Lavos (Chrono Trigger) "This...this can't be the way the world ends..."  Lavos is a cosmic parasite - that is to say, it seeks out planets rich with life energy, which it then devours over the course of several centuries. Eventually, it surfaces, destroying the planet in a cataclysmic rain of fire, at which point...well, no one's entirely sure what happens after that. It's been heavily implied, however, that the creature either reproduces or moves on to the next world.  Oh, and for those of you trying to apply any sort of morality to Lavos? Don't bother. This thing can't be reasoned with, it can't be talked to, and it can't be intimidated. It's a creature that's well and truly alien, and that, my friends, is what makes it so frightening. Well...that, and the scream. That god damned scream. You can almost feel reality getting kicked in the nuts.   5. 0/Dark Matter (Kirby 2/Kirby 3/Kirby 64) The Kirby franchise is known for its cheerful setting, colorful characters, and happy-go-lucky themes. It's also known for having final bosses that look like they were ripped out of Edgar Allan Poe's darkest nightmares. It's fitting, if you think about it - the Kirby universe is referred to as Dream Land. What's the natural opposite of a dream? That's easy - a nightmare.  It wasn't hard to decide which of the bosses to put on this list. 0 - and Dark Matter, heavily implied to be an extension of the creature's will - is, hands down, the most terrifying enemy in the entire franchise.  It doesn't seem to have any motivation besides the proliferation of Dark Matter throughout the entire universe - not that it's in the habit of explaining itself. It communicates even less than Kirby. That's a considerable accomplishment, given that Kirby utters nary a word over the course of the franchise.  4. Mantorok the Corpse God (Eternal Darkness) "It is now your destiny to fight the Eternal Darkness. I give you a gift in return for an obligation. The gift is your life, sweet dancer. The obligation is this. You hold one of Mantorok’s hearts – the essence of the Corpse God! To some it is a source of great power. From these people, you must defend it, lest they use it to destroy what little brightness your world has left in it... Guard it well!" In addition to looking like he could be C'thun's grandfather, Mantorok the Corpse God holds the unique distinction of being one of the few on this list that isn't either indifferent or outright hostile to humanity. As a matter of fact, it's implied multiple times throughout Eternal Darkness that this creature - a being so powerful that it corrodes reality around it simply by existing - is actually on humanity's side. It likes humans, for some reason fathomable only to itself. It's also the only reason anyone in the game is able to challenge the other ancients and destroy them (hence saving humanity from a fate worse than death).  
Definitely a bit of a departure from the norm, no?  3. Hermaeus Mora (The Elder Scrolls) "All knowledge has a price..." Hermaeus Mora and his Daedric realm of Apocrypha are essentially Bethesda's love letter to H.P. Lovecraft. As the Daedric Prince of Knowledge, Hermaeus Mora has access to a healthy dose of That Which Man Must Not Know - as a matter of fact, he boasts the largest collection of forbidden knowledge in existence, and is always looking to expand his library. Though he's certainly knowable - and somewhat predictable - his rather disturbing and alien appearance, vast stores of knowledge, and bizarre Daedric plane firmly secure him as an abomination. Even so, there's some knowledge yet that even he does not possess - but not for long.  2. The Grotesqueries (Drakengard) "All has become chaos...we have descended into hell."  The Grotesqueries are called by many different titles: The Watchers. The Nameless. The Gods. They have the power to manipulate reality - even sealed away as they are - and they can possess and control the weak-minded. Their only desire - if we can even ascribe a human emotion to alien creatures such as these - seems to be to return to the mortal coil. In several of the endings, they succeed. I'll give you three guesses as to what happens, and the first two don't count. It's bad enough that the music which heralds their coming is warped as all hell - whoever designed them made full use of the uncanny valley effect. They're a race of colossal, vaguely humanoid infants with a full set of teeth and swollen, closed eyes. They constantly make strange, distorted cooing sounds. It's the sort of thing you'd hear from a baby, if that baby was spawned on the ninth layer of hell to punish mankind for the sin of existing. My words don't do these things justice. Watch this video, and try not to shit out your spine in sheer terror.  1. Giygas (Earthbound) "You cannot grasp the true form of Giygas' attack" Of course, you can't be surprised to find Giygas on here. He's pretty much one of the most iconic bosses in Nintendo history.  For a game where one of the enemies has a stunning attack which involves him uttering a curse word, Earthbound's final boss is especially goddamn twisted - reportedly an attempt by the lead developer to work out the trauma of watching a violent murder/rape depicted on-screen at a movie theater when he was just a child. Yeah, we're already off to a good start. Basically, Giygas is a psionic of incredible, nigh-unstoppable power, who is destined to conquer the Earth a mere ten years after the events of Earthbound. Fighting him here, when he is at his full strength, is a futile endeavor - one all but guaranteed to end in defeat. The solution? Go back in time to perform what is essentially a divine abortion. It's about as disturbing as it sounds: even though you're fighting a cosmic horror, you can't help but feel as though you're a terrible person whose spot in hell is more or less guaranteed. Oh, and this was supposedly a kid's game.  Honorable Mentions:  The Elder God (Legacy of Kain), Sin (FF10), Soul Edge, The Red Queen (American McGee's Alice), The Creator (Aquaria), The Overmind (Starcraft), The Gravemind, Zeromus, The Necromorphs (Dead Space), Jenova, The Gman, The Old One (Demon's Souls), Everything from Silent Hill
1
Negative
true
73670556-e355-48e3-af9e-5f76321f5cee
alignment-classifier-documents-unlabeled | StampyAI/alignment-research-dataset/eaforum
Neuronpedia - AI Safety Game ### **EA CONTEXT** Neuronpedia is funded by a short-term grant from an EA fund which ends in a few weeks. In lieu of the traditional "final report/paper" they requested, I am posting my project on the EA forum, so that you can play the game, provide feedback, and hopefully contribute. I am new to this forum, so I apologize if this is not the right format/place for this.    ### **WHAT'S NEURONPEDIA?** Neuronpedia is an AI safety game that documents and explains neurons (and [features found with Sparse Autoencoders](https://www.lesswrong.com/posts/Qryk6FqjtZk9FHHJR/sparse-autoencoders-find-highly-interpretable-directions-in)) in modern AI models. It aims to be the Wikipedia for interpreting AI models, where the contributions come from users playing a game. Neuronpedia also wants to connect the general public to AI safety, so it's designed to not require any technical knowledge to play.    ### **OBJECTIVES** 1. Crowdsource AI interpretability to help build safer AI ("Geoguessr for AI") 2. Increase public engagement, awareness, and education in AI safety   ### **LONG TERM VISION** Anyone can upload their LLM, and then Neuronpedia does *all the interpretability work:* generating activation texts, making features/directions, generating explanations, collecting human/player explanations, scoring explanations, and producing/exporting all the data via an API or to files. Basically, a one-stop shop for interpretability. * Neuronpedia is not that far off from this, feature-wise. We have done every part of those tasks, but haven't built the tooling to do them all in one streamlined process.   ### **STATUS** * Neuronpedia's v1.0 is ready for public release and will be posted in more places in the coming days. * During the beta period, we collected 3,000+ explanations from players, over half of which beat GPT-4's explanations when scored using OpenAI's [Automated Interpretability](https://github.com/openai/automated-interpretability). Over 500 votes have been collected, and over 2,000 explanations have been verified. * Initially only explaining neurons, Neuronpedia now also has features/directions found with Sparse Autoencoders, thanks to researchers [Hoagy Cunningham, Logan Riggs, and Aiden Ewart](https://github.com/HoagyC/sparse_coding/tree/main).   ### **WHAT YOU CAN DO** 1. **Play @** [**neuronpedia.org**](https://neuronpedia.org/) - you can log in with email, Apple, Google, or GitHub. 2. **Tell a friend** - even those who are not interested in AI. 3. **Give feedback** - share ideas and ask questions. 4. **Join Discord** - <https://discord.gg/kpEJWgvdAx>   ### **FEATURES** * **Pixel-based role-playing game** - complete with graphics/animations, leveling up, equipment, potions, etc. * **Explain Mode** - View activations for a neuron/feature and explain them. * **Dig Mode** - Verify the accuracy of explanations. * **In-game Shop** - Earning "Bones" (in-game currency) by playing Dig Mode lets you buy stuff like colorful hats for you and your character's pet, etc. * **Leaderboard** - Compete against other players. * **Mobile Optimized** - Play on the bus or when you're having trouble falling asleep from AI nightmares. * **Sharing / Summary** * **View Data** - Anyone can browse the current results, vote, and even comment on neurons/features: <http://neuronpedia.org/gpt2-small> * **Test New Activations** - You can test new activation texts for every neuron/feature.   
### **MOST SIGNIFICANT FLAWS** * **Scoring** - OpenAI's Automated Interpretability is good, but it often doesn't take the context of tokens into account. For example, if the highest-scoring tokens of the top 20 activations are the word "cat" in the fragments "blue cat", "white cat", "black cat", the highest-scoring explanation *should* be "cat with a color before it" - but oftentimes the explanation "the word 'cat'" will have a higher score. + Solution 1: Train a model specific to scoring that also looks at context. This is a work in progress - we have enough activations/explanations that we can do this. + Solution 2: User scoring. This is implemented via "Dig Mode" in the game, but I haven't had time to connect it to "Explain Mode" to complete the loop of generating a score. Also, it's asynchronous, so it doesn't immediately give the user feedback. + Solution 3: Regex or other non-AI based scoring. This was suggested by player duck-master, but I haven't had time to figure out the details or implement it (a toy sketch of this idea appears after this post). * **Polysemanticity** - I'm grateful for the features/directions from Hoagy/Logan/Aiden - they are less polysemantic than neurons - but they're limited in number (~3000 non-dead features, ~800 of those are somewhat low-activating) due to our lack of GPU resources and their lack of time. I believe that more compute can result in even better features. * **Onboarding is lacking / rough** - there should be a much more graceful transition in difficulty for the game from onboarding to the real game - e.g., give users some fake puzzles first so they can get the hang of it. + Solution: Fairly straightforward to do this, but I haven't had time. * **Optimizing for Fun vs Usefulness** - Building a game around what AI safety researchers need has been a tough but interesting challenge. In a pure game, your only objective is to get the player glued to the screen, and you can adjust whatever you like to make this happen - for example, you could generate thousands of really easy puzzles (e.g., angry birds, clicker games) that make the player feel good. Word games like Wordle have a definite solution, making "winning" more satisfying/fair. For Neuronpedia, the difficulty is that there is not a "right answer" - or you won't get the best answer, or the scoring/results will seem unfair. + Solution - There's a lot to say here, but I'm optimistic and have seen that tweaking gameplay using a bit of creativity/cleverness can overcome these problems. Also - there are some very involved players who are happy to contribute to AI safety even if the game isn't as engaging as Zelda or Mario.   ### **WHAT'S NEXT FOR NEURONPEDIA** * **Features** (other than the ones mentioned in the "flaws" section above) + Data / ML - Let anyone upload their own LLMs - Let anyone upload their own features/directions - More and Better Models, Layers, Directions/Features + Game - New Game Types/Modes - PvP, teams, contests, etc * Turn other Interp needs into new game modes - Weekly Summary and other notifications - Character / Profile Refresh - Challenges / Contests - Achievements - Unlockables (robes, shoes, gloves, wands, etc) + General - Code cleanup/backend optimizations/fixes * **Funding** - Neuronpedia's three-month EA grant runs out soon. I've spent personal funds on keeping the site up, servers/inference machines, and other bills. I'd really like to keep working on it full time, so let me know ([[email protected]](mailto:[email protected])) if you're interested in supporting it - or if you can recommend me for a grant. 
I'm also open to investment for equity, but I'm not sure if an AI safety game will beat the S&P 500 - though you could potentially profit from reducing humanity's AI risk!
1
Negative
true
moviesum-battle-los-angeles
rohitsaxena/MovieSum
On August 12, 2011, hostile alien spacecraft make landings off the coasts of large cities. Los Angeles is evacuated while Marines from Camp Pendleton arrive, including Staff Sergeant Michael Nantz, an Iraq War veteran. Nantz, who was to begin his retirement, is assigned to the 1st Platoon, Echo Company, of the 2nd Battalion, 5th Marines. Under the command of 2nd Lieutenant William Martinez, the platoon arrives at a forward operating base (FOB) established at Santa Monica Airport. While using rapid dominance, the alien ground forces have no apparent air support. Hence, the Air Force prepares a carpet bombing of the Santa Monica area, and the platoon is given 3 hours to retrieve civilians from a police station. As they advance through Los Angeles, they are ambushed and suffer multiple casualties. Nantz takes Marines Imlay and Harris to look for Lenihan, who is missing from the group. After fighting off an alien, they team up with some soldiers from the Army's 40th Infantry Division and an Air Force intelligence Technical Sergeant, Elena Santos. At the police station, the makeshift platoon finds five civilians: veterinarian Michele, children Kirsten, Amy, and Hector, and Hector's father Joe. A helicopter arrives to evacuate wounded Marines but cannot take on the weight of the civilians. During takeoff, it is destroyed by alien air units, killing Grayston, Guerrero, Lenihan, and Simmons. The Marines commandeer an abandoned transit bus for the evacuation. En route, they deduce that the alien air units are drones that target human radio transmissions; Santos reveals that her mission is to locate and destroy the aliens' command center controlling the drones. When their bus comes under attack by aliens on an elevated freeway, the Marines rappel the group to street level. In the ensuing battle, Marines Stavrou and Mottola and the remainder of the California Army National Guard soldiers are killed, while both Joe and Lieutenant Martinez are wounded fighting the aliens. Martinez uses his radio to attract the aliens, then detonates explosives, sacrificing himself for his team. Nantz is now in command of surviving personnel Santos, Imlay, Kerns, Lockett, Harris, Adukwu, and the civilians, continuing their escape from the bombing zone. A scientist speculates on news media that the aliens seek Earth's water resources for fuel while exterminating the human population. The planned carpet bombing fails to materialize. Reaching the FOB, the Marines find it destroyed and the military in retreat from the city. The Marines plan to escort the civilians to an alternate extraction point. When Joe dies from his wounds, Nantz comforts Hector. Lockett confronts Nantz regarding his brother, a Marine who, with four others, was killed during Nantz's last tour. They come to peace when Nantz explains that he continues to think of them and recites each one's name, rank, and serial number. Nantz motivates the group to move forward to honor their fallen comrades, including Joe, for his bravery. They reach the extraction point and evacuate by helicopter. In flight, the chopper experiences a brief loss of power. Nantz theorizes that they are flying near the alien command center, transmitting intense radio messages to its drones. He orders his unit to accompany the civilians while he stays to reconnoiter the area, but his soldiers join him. Searching through sewers, they confirm the presence of a large alien vessel. 
Kerns radios in to request missiles, which Nantz manually directs using a laser designator while the others defend his position. Kerns is killed when a drone homes in on his radio, but the Marines succeed in guiding a missile to the command module, which is destroyed. The uncontrolled drones fall from the sky, and the alien ground forces retreat. The remaining soldiers—Nantz, Imlay, Lockett, Harris, Adukwu, and Santos—are evacuated to a base in the Mojave Desert, where they are greeted as heroes. They are told that their successful plan has been transmitted to the militaries battling alien forces in 19 other cities, that Michele and the three children were rescued, and that they can now rest. Instead, they re-arm and depart to retake Los Angeles.
1
Negative
true
9b222bf7-c62b-426f-b1d3-6917a7ea3dc7
https://juliaemccoy.medium.com/the-most-optimistic-view-of-e-acc-agi-asi-youll-ever-read-but-also-a-call-to-arms-3c65c186fa0c
Back in the early 2000s, computer scientist Ray Kurzweil said: “The future will be far more surprising than most people realize.” (The Singularity is Near, p.11) I don’t even know how to write what I’m about to write. It’s hard to put into words. So, I’ll try to be succinct and straight to the point. If you work for income, you need to read this. If you are unaware of what Artificial Intelligence, accelerated (if not quantum) computing, and energy resources are all on track to collide and supersprint into, you need to read this. If you have children, you need to read this. How I [Reluctantly] Joined the AI Side A little over a year ago, I transitioned to working full-time in AI. I chose the company that would replace me — Content at Scale. I’d sold my human writing agency two years prior. I knew AI would change everything when my mentor and the godfather of my industry, Joe Pulizzi, told me point-blank in 2017 that “AI will create 99% of content marketing in the 2020s, and there’s no way around it.” Pulizzi was right. ChatGPT, launched November 30, 2022, was the catalyst, the “shot heard round the world,” that allowed the world to interact with artificial intelligence through a simple text prompt. AI had its first major breakthrough in writing (LLMs). (Text-to-image, text-to-video, and audio generation followed shortly after. We see continual breakthroughs on all multimodal fronts week after week.) It was hard to believe, and I didn’t believe it till I, myself, found artificial intelligence capable of writing better than humans. My total obsession with AI began in January 2023. Over the next two years, I worked every day on the AI software taking the content writing jobs I used to get hired for. It’s a bit surreal to think on. And today, I own my own AI firm at First Movers, where we’re changing the game helping people adapt to what’s coming. I’ve never been more obsessed with anything in my life, the way I’m obsessed with learning, implementing, and studying AI with the brilliant folks I work with. But I don’t have time to dwell on my obsession. I’ve got to explain the future. Let’s get into it. What Our World Will Look Like Soon I’m an AI boomer, not an AI doomer. I’m also 100% e/acc (effective accelerationism, a label that emerged in 2023 for someone who is explicitly pro-technology). You may appropriate all those labels my way. I wholeheartedly receive them. I believe that God put the availability of AI on this earth, but it was up to us to “discover” it. And we did. Thanks in large part to Jensen Huang, a human who didn’t give up–the CEO of Nvidia, a company that is now leading the AI chip arms race–we had an official breakthrough in computing power, allowing us to leave generalized computing and enter the era of accelerated computing. This made humankind capable of running and accessing artificial intelligence on a computer; from this new chip computing capability, OpenAI and ChatGPT were born (I’ve oversimplified, but we have lots to cover). There was also a multitude of other things building up, including a decade of internet content accumulating for LLMs to scrape and gain data from, that allowed multimodal AI to be birthed. (Text-to-video, text-to-image, etc.) The next source of data will be AI turning to armies of humanoids interacting with the real world to gain even fresher data. Like everything else, AI is an opportunity for humankind given to us, but unlike everything else, it’s a Holy Grail that unlocks (dare I use ChatGPT’s favorite word) multi-dimensional opportunities all at once. 
It is the pot of gold at the end of the rainbow, times a million. Because when artificial intelligence as a whole gets better, every single industry does. The cost of goods will fall to extraordinary new lows as the human work required also falls to zero. And we will have a fully reshaped world. Peter Diamandis says we are beyond exponential; we are now into something called superexponential growth, otherwise known as saltatory leaps, or hard takeoff. (The debate is that hard takeoff, not soft takeoff, is bound to happen — there is no escaping the velocity at which AI is now taking off. I agree.) I believe in the next five to six years, by 2029–2030, we will see the entire world shift to a new world that is largely run by agnostic Artificial Intelligence. Sam Altman believes this is decades away. Many AI researchers agree. Some say it’s hundreds of years away. Some of the leading entrepreneurs in AI (Peter Diamandis, Moonshots) believe it’s mere years away. My prediction — by 2030, the world will have the capacity to be nearly fully run by AI. Now, I could be off by 10, 20, 30, or even 50 years. I could be off by 2 years. Time will tell. But it will happen, mark my words. Within five to six years, we will achieve AGI and ASI, which are synonymous with each other and simply stand for the highest level of intelligence possible on this earth–a level of artificial, non-bio (non-human) intelligence so high that we cannot actually and properly sketch out a physical representation–there is no way to show it. (We will have to ask AGI, when it is released, how it would explain itself, just to get the answer. That’s how far beyond human intelligence the breakthrough intelligence of AGI will be.) The good @waitbutwhy and @tsarnick on X have shared some rudimentary drawings that spell this out quite simply, in a way all humans should be able to follow. Very simple: AI has surpassed dumb humans and Einstein alike (circa 2020 it was on record that AI can surpass us at any task in reading, writing, and math). The forms of intelligence available on earth now go BEYOND our finite human brains. There is no ability for us to draw how far above human intelligence ASI will be. It’s physically impossible. Their feeble renderings fall quite short. AGI and ASI will be the last inventions humankind will ever make. Because after we release this, ASI will possess knowledge and intelligence light-years beyond humanity’s capabilities; therefore every breakthrough, every invention, will come through it. Not through us. I predict that within the next five to six years, AI will run everything: it will run our healthcare systems, change our financial systems and the way currency/money works forever, and completely reshape manufacturing, transportation and automotive, energy, education, retail and e-commerce, entertainment and media, and telecommunications (internet and beyond). When this is achieved, labor will go down from 100% of “bodies employed” at work (real humans working physical jobs, whether it’s answering a phone in a call center or showing up for a mice-extermination service call) to none. 0% of humans will be needed at work. No one will need to work; AI is infinitely better, more capable, faster, stronger, wiser. Every corporation will tap into an army of bots, whether it’s answering service calls, painting houses, writing content to rank in Google, or building an SEO campaign. 
Everything, and I mean everything, will be done by AI. Work (tasks) will be fulfilled by AI, but there will still be industries where there will be a demand and need for humans. However, they may not be providing that service for income. The industries where we will still seek a human touch will be experience (travel, tourism); family care (nannying); statutory jobs (creating laws); and meaning jobs (philosophy, thought leadership, influencers). I don’t see any clear way of stopping the progression to AGI, or slowing it down. The reputation of the biggest tech companies and innovators in the world now actually relies on their ability to produce AGI-level breakthroughs. We all watch the news for this, though many of us aren’t aware of what’s really at stake. These names include giants like Microsoft, Google, Amazon, Alphabet, Meta, IBM, and Elon Musk, to name a few — all of which are in this arms race together. Billions of dollars of market share are at stake between all of them. There is no stopping competition when it comes from the biggest companies in the world. So, get ready to say goodbye to the world as you know it. The breakthrough of AGI will affect everything, especially our ability to work and what we do for a living. The Glaring Problem in 2020s’ Humanity that AI Could Actually Solve Talk to any business owner in the 2020s, and you’ll most likely hear them all say this: “Good workers are hard to find.” You know why? A global poll conducted by Gallup in 2017 uncovered that out of the world’s one billion full-time workers, only 15% of people are engaged at work. That means that an astronomical 85% of people are unhappy in their jobs! This is insane. And this is why work as we know it is vastly broken across all industries. This is why, when hundreds of people apply to a job, you have to narrow them down to find the one, if not the 1%. This is why depression runs rampant, and even illness and disease and stress occur (the number one cause of stress today is worry, often stemming from finances). Because the majority of folks actually hate what they do. And for employers, this is why it’s impossible to find good people. Now, coaches on the internet will claim that the answer lies in “go and be an entrepreneur!” but the truth of the matter is, only 1% of the world is cut out for entrepreneurship. Which leads me to… How Can We Adapt Healthily? The Most Hopeful View You’ll Read In a world where corporations can deploy AI in every form–from walking humanoids to self-sufficient software programs–and it completes every task with infinitely less time, cost, and complaint than a human is capable of, the human won’t have to do the work they hate. Without the need for work, the human can finally live a meaningful life. Imagine this: you need to get a bigger house. You have a set of twins on the way with your significant other. A corporation sends an army of bots (they might look like crawling spiders, or some other physical form best suited to the work at hand), and in one day, they have an entirely new house, driveway, wall, everything built for you. It cost the corporation virtually nothing but the energy it took to deploy them. There was zero physical or human labor involved. 
From a house taking fallible humans a year or more to build, with dichotomies and arguments and lawsuits and disappearing workers and you-name-the-human-problem in the middle of it all, to being done in a day without human work! This is what Artificial Intelligence will be capable of when we have AGI and ASI at hand. Don’t believe me? Watch what Atlas, a humanoid robot by Boston Dynamics, is capable of today (this is without AGI): What will happen if corporations don’t need humans, and human labor falls to zero? UBI (Universal Basic Income). UBI is the concept that every human will have access to a universal basic income that provides them with all the necessities of life, without having to work. Now, the danger lies in where UBI will come from. This is where things get gnarly. What government, or group of governments, will create and govern the tax that brings UBI to life? This is what we are in the very middle of determining. The lawmaking decision monopoly will go, I believe, to the entity involved in helping bring AGI to life. And it’s an arms race to that breakthrough, right now. The jury is still out, and so is the verdict. I believe we still have the time to create a beautiful post-AGI world. We are in early days here, even though AI is advancing faster than the speed of light, and the components of AGI are coming to life. We still have the time to conceptualize and prepare our world economies for adaptation. And that is why my call to arms is to ask every politician, every lawmaker, every economic advisor that cares about humanity to begin thinking about what life post-AGI, with an equalizing, fair, non-dystopian UBI, could look like. I don’t fully trust folks like Klaus Schwab, founder of the World Economic Forum, who blatantly states he’d like humans to “own nothing” one day, to be in charge of this reshaping of society. However, he seems to be one of the few that actually does understand and grasp, if not propel forward, the concept of UBI and totalitarian AI; we need more economic advisors, chairmen, politicians, and world leaders to grasp what AI is, what AGI is, and the need to recreate and reenvision a meaning-based world and national economy, where finances are not the pull, inflation isn’t a thing, and the need for human labor is at zero. Imagine waking up, and all you need to do is live a life of meaning. What would this finally free you up to do? This is what AGI is capable of giving us. A beautiful future, one we’ve never been able to have because we’ve been struggling to provide for our families, “just get by,” or tolerate circumstances at hand because “things are tight.” Imagine if all of that was gone? And we never had to work a job again? Let’s adapt. I’ve looked at all angles, and in the end I’m embracing e/acc because not only are we incapable of stopping it, but it is in our best interests to embrace the acceleration and continue breakthroughs forward from all angles, all teams, all companies, so we don’t see the monopoly of AGI all go to one company or corporation releasing the entirety — because that monopoly would be dangerous. I think competition is healthy, and it creates a unique opportunity where an unexpected entrepreneur can come out of nowhere and gain that Big W (big win). Important: the game of AGI and ASI has not been won yet. 
I think that Joe Rogan’s proposition to Sam Altman is actually quite interesting: “What if an agnostic, all-knowing AI was put together by multiple nations to be our ruling and governing body?” But I think it’s going to take humanity to create a governing body for AI. If we let AGI do it, it’s simply too powerful. I wouldn’t trust it. But we must take part in this, fully, and ditch the fear of AI for this whole AGI thing to be successful and give humanity much more meaning. I’m hopeful in humanity, still. I think there is so much opportunity for a world post-AGI to be a beautiful, healthy, and wonderful one. ❤️ I hope I am proven right on this one. The world is at stake.
2
Positive
true
44280070-062b-4948-852e-cb3761488341
https://www.singularityweblog.com/singularity-utopia-post-scarcity-awareness-as-an-antidote-to-despair/
In the year 2011, people know that the current year’s computers will be faster than the previous year’s. Unfortunately, many people haven’t realized how Artificial Intelligence, sometime around the year 2040, will be able to create food and other products almost out of thin air. Technology will become extremely sophisticated. Solar power and other forms of energy reclamation will create free supplies of unlimited energy for everyone. We will effortlessly grow or print food and products (via 3D printing) within our own homes. Everything will be decentralized and everyone will be all-powerful. In the future nothing will need to be repaired, because molecular nanotechnology will ensure everything is self-repairing.
2
Positive
true
9b67d2eb-aca9-4384-84ad-06fd5a0522ff
https://e-acc.medium.com/effective-accelerationism-e-acc-the-rise-of-a-new-tech-philosophy-e696e4d83447
At its core, e/acc believes that unrestricted technological progress, especially in the field of artificial intelligence, is the solution to universal human problems such as poverty, war, and climate change. The term “Effective Accelerationism” is a portmanteau of “effective altruism” and “accelerationism,” reflecting its roots in both philanthropic thinking and the desire to speed up technological advancement.
2
Positive
true
2a3ecf92-0efe-47dd-8c0b-0b4f80cfe835
https://openai.com/index/planning-for-agi-and-beyond/
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
2
Positive
true
883a4697-9b88-41cc-af23-6952b14e3467
https://a16z.com/the-techno-optimist-manifesto/
We believe intelligence is the ultimate engine of progress. Intelligence makes everything better. Smart people and smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity; we should expand it as fully and broadly as we possibly can. We believe intelligence is in an upward spiral – first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves. We believe we are poised for an intelligence takeoff that will expand our capabilities to unimagined heights. We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think. We believe Artificial Intelligence is best thought of as a universal problem solver. And we have a lot of problems to solve. We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire. We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder. We believe in Augmented Intelligence just as much as we believe in Artificial Intelligence. Intelligent machines augment intelligent humans, driving a geometric expansion of what humans can do. We believe Augmented Intelligence drives marginal productivity which drives wage growth which drives demand which drives the creation of new supply… with no upper bound.
2
Positive
true
47235568-c3b9-4cb0-81d5-c22c295208fa
https://blog.samaltman.com/three-observations
Three Observations Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity. Systems that start to point to AGI* are coming into view, and so we think it’s important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields. People are tool-builders with an inherent drive to understand and create, which leads to the world getting better for all of us. Each new generation builds upon the discoveries of the generations before to create even more capable tools—electricity, the transistor, the computer, the internet, and soon AGI. Over time, in fits and starts, the steady march of human innovation has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives. In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential. In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today. We continue to see rapid progress with AI development. Here are three observations about the economics of AI: 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude. 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger. 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future. If these three observations continue to hold true, the impacts on society will be significant. We are now starting to roll out AI agents, which will eventually feel like virtual co-workers. Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others. Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work. In some ways, AI may turn out to be like the transistor economically—a big scientific discovery that scales well and that seeps into almost every corner of the economy. 
We don’t think much about transistors, or transistor companies, and the gains are very widely distributed. But we do expect our computers, TVs, cars, toys, and more to perform miracles. The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc. But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today. Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less. We expect the impact of AGI to be uneven. Although some industries will change very little, scientific progress will likely be much faster than it is today; this impact of AGI may surpass everything else. The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically. Technically speaking, the road in front of us looks fairly clear. But public policy and collective opinion on how we should integrate AGI into society matter a lot; one of our reasons for launching products early and often is to give society and the technology time to co-evolve. AI will seep into all areas of the economy and society; we will expect everything to be smart. Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs. While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy. Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas. In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect. 
Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine. There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all.
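A quick numerical gloss on the three observations above, as a back-of-the-envelope sketch in Python. The 10x-per-12-months, 2x-per-18-months, and log-of-resources figures come from the text; everything else is illustrative:

```python
import math

# Observation 2 above: the cost of a given level of AI falls ~10x every
# 12 months; Moore's law was ~2x every 18 months. Compare over a few years.
ai_annual = 10.0                       # 10x cheaper per year
moore_annual = 2.0 ** (12.0 / 18.0)    # ~1.59x cheaper per year

for years in (1, 2, 3):
    print(f"{years} yr: AI {ai_annual ** years:,.0f}x vs Moore {moore_annual ** years:.1f}x")

# Observation 1, read literally: capability scales with the log of resources,
# so each constant gain in "intelligence" costs a constant multiple of compute.
for spend in (1e6, 1e7, 1e8, 1e9):
    print(f"${spend:,.0f} of compute -> capability ~ {math.log10(spend):.0f} (arbitrary units)")
```

Over three years the stated rates compound to roughly 1,000x for AI cost declines versus about 4x for Moore's law, which is the gap the post is pointing at.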
2
Positive
true
<urn:uuid:329a4513-0439-46fa-858d-804ccc3d0b05>
dclm-dedup-25B-ai-scifi-docs | https://www.constructorasenqueretaro.com/how-to-build/often-asked-how-to-build-jarvis.html
Often asked: How To Build Jarvis? Is it possible to build Jarvis? The answer is yes! In 2016, Facebook founder Mark Zuckerberg revealed his own version of Tony Stark’s artificial intelligence system, Jarvis, after spending a year writing computer code and teaching it to understand his voice. How do you make AI like Jarvis? You can create a Jarvis-like AI using a free app called LINK Mark II. You can use any web browser on PC or Mac. 1. Click Create an account. 2. Click I agree. 3. Fill out the registration form and click Sign Up. 4. Confirm your email address. Is there any software like Jarvis? Commercial use of Watson began in 2013. IBM’s Watson appears to be a step in the direction toward AI similar to J.A.R.V.I.S. With quantum computing as an expanding frontier, processing speeds could become even faster, making something like J.A.R.V.I.S. more realizable. How did Tony Stark create Jarvis? When he built the Iron Man suits, he uploaded J.A.R.V.I.S. to them, and it (he?) basically helped in the functioning of the suits, until Tony uploaded him to Ultron’s vibranium-based organic body, creating the Vision (as seen in Avengers: Age of Ultron). Who is better, Jarvis or Friday? FRIDAY’s abilities seem to be an upgraded version of J.A.R.V.I.S.’s. She is fully capable of controlling Iron Man suits as well as assisting Tony in their operation and controlling whole vehicles, like her predecessor, but she displays new abilities as well. How smart is Jarvis? J.A.R.V.I.S. is quite a sophisticated AI, able to interact with human beings just as a living person would. It possesses very deep scientific knowledge and is able to aid Tony Stark in his research. Can I create my own AI? Can I get Jarvis on my iPhone? Download Jarvis: Powered by Marvel and enjoy it on your iPhone, iPad, and iPod touch. Jarvis is not just an AI – he is a Marvel character that you can carry in your pocket! Is Jarvis software safe? Jarvis is 100% safe. It doesn’t contain malware, so you can download, install, and use it on your PC without any issues. Does Jarvis work offline? Intel Jarvis, voice commands that work even offline! Intel Jarvis – a wireless earpiece by Intel – will process voice commands on the device rather than outsourcing them to powerful servers over the internet, reducing the time the process usually takes. Is there a Jarvis app? Jarvis is a personal assistant app which you can use to complete your tasks with voice commands. You have seen this Jarvis personal assistant in the Iron Man movie, but you can also use one on your Android smartphone. Jarvis apps are more advanced and come with more unique smart features. Is Jarvis dead? Who replaced Jarvis in Iron Man? JARVIS was replaced by FRIDAY, who was with Tony from Ultron forward. Whereas JARVIS had stood for Just a Rather Very Intelligent System, FRIDAY stood for Female Replacement Intelligent Digital Assistant Youth, according to Fandom. Another assistant created by Tony, KAREN, showed up in Spider-Man’s suits. Why did Tony stop using Jarvis? It’s a backronym because the real Jarvis was Howard Stark’s butler. He’s seen in Agent Carter, and in the Iron Man 2 tie-in comics. So, when Ultron tries to kill J.A.R.V.I.S. during Age of Ultron, Tony feels a personal loss (as do the others for some reason).
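As a toy companion to the how-to answers above, here is what the skeleton of a "Jarvis-like" assistant loop looks like in code. This is a hypothetical, text-only sketch using only the Python standard library; a real assistant would add speech recognition, text-to-speech, and a language model:

```python
# A toy, text-only "Jarvis-like" assistant loop (illustrative only).
import datetime

def handle(command: str) -> str:
    command = command.lower().strip()
    if "time" in command:
        return f"It is {datetime.datetime.now():%H:%M}."
    if "name" in command:
        return "I am a toy Jarvis-like assistant."
    if command in {"quit", "exit"}:
        return ""  # empty reply signals the loop to stop
    return "Sorry, I don't know that command yet."

if __name__ == "__main__":
    while True:
        reply = handle(input("You: "))
        if not reply:
            break
        print("Jarvis:", reply)
```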
2
Positive
true
77c4eccd-e908-4baa-80b3-ad1c75c92a11
https://optimists.ai/2023/10/16/introducing-ai-optimism/
Artificial intelligence promises to greatly improve the quality of life of every human on Earth. Already, AI assistants are democratizing access to education, high-quality medical advice, and psychotherapy. Text-to-image models like Midjourney and Stable Diffusion have unleashed the creativity of the masses, empowering people to create stunning artwork at little or no cost. Tools like GitHub Copilot and GPT-4 are making it easier than ever to create software, automating the most tedious parts of programming and enabling beginners to get started coding more quickly. Today’s AI tools are just the beginning. AI is improving quickly because we’re teaching it to imitate the vast amount of human-generated text, images, and video available on the Internet— it does not need to learn from scratch. Since modern AI is powered by human imitation, we should not think of it as an alien force that might take over the world if given the chance. Rather, it is an extension of our culture, a crystallization of human knowledge and values. The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it. As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms. Optimists are excited about the new technologies that AI will enable in the coming decades. But we also strongly believe that these technologies should be thoroughly optional. No one should feel forced to change their way of life due to social pressure, competition, or legal mandate. In short, AI optimists are working for a free and fair future. For thousands of years, humans have built tools to extend our reach, enabling us to do things we couldn’t do before. Today, we are imbuing our tools with the gift of intelligence, unlocking a new age of creativity and innovation.
2
Positive
true
3b6020b0-75f8-4f53-8300-d7b7463686a0
https://optimists.ai/2023/11/28/ai-is-easy-to-control/
Why are billions of dollars being poured into artificial intelligence R&D this year? Companies certainly expect to get a return on their investment. Arguably, the main reason AI is profitable is that it is more controllable than human labor. The personality and conduct of an AI can be controlled to much finer-grained precision than that of any human employee. Chatbots like GPT-4 and Claude undergo a “supervised fine tuning” phase, where their neural circuitry is directly optimized to say certain words, phrases, and sentences in specified contexts. Algorithms like direct preference optimization (DPO) and reinforcement learning from human feedback (RLHF) are also used to holistically shape chatbots’ brains into the kind of systems most preferred by human graders. And by carefully curating the dataset on which the AI is initially trained, we can also control all of the AI’s most formative experiences. Since AIs are computer programs, they can be cheaply copied and run on many computers in parallel, making it economically viable to invest millions or even billions of dollars in carefully training a “single” artificial employee. Needless to say, none of this is possible or ethical to perform on human employees.[1] These days, many people are worried that we will lose control of artificial intelligence, leading to human extinction or a similarly catastrophic “AI takeover.” We hope the arguments in this essay make such an outcome seem implausible. But even if future AI turns out to be less “controllable” in a strict sense of the word — simply because, for example, it thinks faster than humans can directly supervise — we also argue it will be easy to instill our values into an AI, a process called “alignment.” Aligned AIs, by design, would prioritize human safety and welfare, contributing to a positive future for humanity, even in scenarios where they, say, acquire the level of autonomy current-day humans possess. In what follows, we will argue that AI, even superhuman AI, will remain much more controllable than humans for the foreseeable future. Since each generation of controllable AIs can help control the next generation, it looks like this process can continue indefinitely, even to very high levels of capability. Accordingly, we think a catastrophic AI takeover is roughly 1% likely — a tail risk[2] worth considering, but not the dominant source of risk in the world. We will not attempt to directly address pessimistic arguments in this essay, although we will do so in a forthcoming document. Instead, our goal is to present the basic reasons for being optimistic about humanity’s ability to control and align artificial intelligence into the far future. AIs are white boxes Many “alignment problems” we routinely solve, like raising children or training pets, seem much harder than training a friendly AI. One major reason for this is that human and animal brains are black boxes, in the sense that we literally can’t observe all the cognitive activity going on inside them. We don’t know which neurons are firing and when, we don’t have a map of the connections between neurons, and we don’t know the connection strength for each synapse. Our tools for non-invasively measuring the brain, like EEG and fMRI, are limited to very coarse-grained correlates of neuronal firings, like electrical activity and blood flow. Electrodes can be invasively inserted to measure individual neurons, but these only cover a tiny fraction of all 86 billion neurons and 100 trillion synapses. 
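For readers unfamiliar with the fine-tuning methods named above, here is a minimal sketch of a DPO-style preference loss, assuming PyTorch and toy log-probabilities standing in for a real language model (illustrative only, not any lab's actual training code):

```python
# Minimal sketch of a DPO-style preference-optimization step on toy numbers.
import torch
import torch.nn.functional as F

policy_chosen = torch.tensor([-5.0], requires_grad=True)    # log p(chosen)
policy_rejected = torch.tensor([-5.2], requires_grad=True)  # log p(rejected)
ref_chosen, ref_rejected = torch.tensor([-5.5]), torch.tensor([-5.5])
beta = 0.1

# DPO loss: push the policy to prefer the "chosen" response over the
# "rejected" one, relative to a frozen reference model.
margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
loss = -F.logsigmoid(beta * margin).mean()
loss.backward()  # backprop computes exact gradients through the "white box"

print(f"loss = {loss.item():.3f}")
print("grad on log p(chosen):", policy_chosen.grad)      # negative: raise it
print("grad on log p(rejected):", policy_rejected.grad)  # positive: lower it
```

Repeated gradient steps on this loss nudge the model toward the graders' preferred responses, which is exactly the kind of direct weight-level shaping the passage describes.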
Black box methods are sufficient for human alignment If we could observe and modify everything that’s going on in a human brain, we’d be able to use optimization algorithms to calculate the precise modifications to the synaptic weights which would cause a desired change in behavior. Since we can’t do this, we are forced to resort to crude and error-prone tools for shaping young humans into kind and productive adults. We provide role models for children to imitate, along with rewards and punishments that are tailored to their innate, evolved drives. In essence, we are poking and prodding at the human brain’s learning algorithms from the outside, instead of directly engineering those learning algorithms. It’s striking how well these black box alignment methods work: most people do assimilate the values of their culture pretty well, and most people are reasonably pro-social. But human alignment is also highly imperfect. Lots of people are selfish and anti-social when they can get away with it, and cultural norms do change over time, for better or worse. Black box alignment is unreliable because there is no guarantee that an intervention intended to change behavior in a certain direction will in fact change behavior in that direction. Children often do the exact opposite of what their parents tell them to do, just to be rebellious. Current AI alignment methods are white box By contrast, AIs implemented using artificial neural networks (ANN) are white boxes in the sense that we have full read and write access to their internals. They’re just a special type of computer program, and we can analyze and manipulate computer programs however we want at essentially no cost. This enables lots of powerful alignment methods that just aren’t possible for brains. The backpropagation algorithm (“backprop”) is an important example. Backprop efficiently computes the optimal direction (called the “gradient”) in which to change the synaptic weights of the ANN in order to improve its performance the most, on any criterion we specify. The animation below shows backprop being used to compute the gradient for an ANN classifying an image of the number 5. ANNs are trained by running backprop, nudging the weights a small step along the gradient, then running backprop again, and so on many times until performance stops increasing. Needless to say, we can’t do anything remotely like gradient descent on a human brain, or the brain of any other animal! Gradient descent is very powerful because, unlike a black box method, it’s almost impossible to trick. All of the AI’s thoughts are “transparent” to gradient descent and are included in its computation. If the AI is secretly planning to kill you, gradient descent will notice this and make it less likely to do that in the future, because the neural circuitry needed to make the secret murder plot can be dismantled and reconfigured into circuits that directly improve performance. In general, gradient descent has a strong tendency to favor the simplest solution which performs well, and secret murder plots aren’t actively useful for improving performance on the tasks humans will actually optimize AIs to perform. Cognitive interventions Unlike gradient descent, which permanently changes overall behavior, cognitive interventions are temporary modifications of a system’s brain activity to influence its actions in real time. 
We rarely use these techniques to control other humans, and when we do, the interventions are invariably crude and inexact, such as getting someone drunk to make them more agreeable. This is partially because we only have black box access to the brain, but also because humans have legal protections against most forms of influence in this manner. However, such techniques are quite common in AI control research. AIs have no rights, so they can be controlled with any technique that developers can imagine. We can run large numbers of experiments to find the most effective interventions, and we can also run it in a variety of simulated environments and test whether it behaves as expected both with and without the cognitive intervention. Each time the AI’s “memories” can be reset, making the experiments perfectly reproducible and preventing the AI from adapting to our actions, very much unlike experiments in psychology and social science. Sensory interventions Since AIs are white boxes, we have full control over their “sensory environment” (whether that consists of text, images, or other modalities). There is a whole subfield of AI called prompt engineering dedicated to searching for text inputs (“prompts”) that align language models with human preferences, and these prompts can be highly effective in isolation, even without gradient descent. Since we can perfectly reset the AI’s state and memories, we can run unlimited numbers of controlled experiments to see which types of prompts are most effective at inducing a desired behavior, and the AI has no way to adapt. By contrast, marketing and propaganda are arguably examples of “prompt engineering for humans,” but even the best propagandists can only control people in very diffuse and unreliable ways, due to the difficulty of running controlled and individualized experiments. In fact, prompting is such an effective method of controlling AIs that it can often be used to make AIs take actions contrary to the intents of their creators. For example, careful users can design prompts, called “jailbreaks”, that cause GPT-4 to output text that contradicts OpenAI’s content policy. This has led to a game of whack-a-mole between model creators and users, in which the creators patch their models to resist the current crop of jailbreaks, and users come up with new jailbreaks. Some people point to the effectiveness of jailbreaks as an argument that AIs are difficult to control. We don’t think this argument makes sense at all, because jailbreaks are themselves an AI control method. The vast majority of jailbreaks occur because of human users who want an AI to do something contrary to its creator’s intentions, and use prompting-based control methods to make the AI behave in the way the user wants. Most jailbreaks are examples of AIs being successfully controlled, just by different humans and by different methods. GPT-4 and Claude are not trying to jailbreak themselves during normal conversations. Crude white box alignment works well in nature Almost every organism with a brain has an innate reward system. As the organism learns and grows, its reward system directly updates its neural circuitry to reinforce certain behaviors and penalize others. Since the reward system directly updates it in a targeted way using simple learning rules, it can be viewed as a crude form of white box alignment. This biological evidence indicates that white box methods are very strong tools for shaping the inner motivations of intelligent systems. 
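The reproducibility point above is easy to demonstrate: because a model's state can be reset exactly, a prompt experiment reruns bit-for-bit. A hedged sketch, with a toy stand-in where a real chatbot would go:

```python
# Toy illustration of perfectly reproducible prompt experiments.
# "toy_model" is a hypothetical stand-in, not a real chatbot API.
import random

def toy_model(prompt: str, seed: int) -> str:
    rng = random.Random(seed)  # fresh state on every call: a full "reset"
    politeness = 0.9 if "please" in prompt else 0.4
    return "complies" if rng.random() < politeness else "refuses"

for prompt in ("do the task", "please do the task"):
    outcomes = [toy_model(prompt, seed) for seed in range(1000)]
    rate = outcomes.count("complies") / len(outcomes)
    print(f"{prompt!r}: compliance rate {rate:.2f}")

# Re-running this script reproduces the exact same numbers, unlike any
# experiment on a human subject.
```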
Our reward circuitry reliably imprints a set of motivational invariants into the psychology of every human: we have empathy for friends and acquaintances, we have parental instincts, we want revenge when others harm us, etc. Furthermore, these invariants must be produced by easy-to-trick reward signals that are simple enough to encode in the genome. This suggests that at least human-level general AI could be aligned using similarly simple reward functions. But we already align cutting edge models with learned reward functions that are much too sophisticated to fit inside the human genome, so we may be one step ahead of our own reward system on this issue. Crucially, this doesn’t mean humans are “aligned to evolution”— see Evolution provides no evidence for the sharp left turn by Quintin Pope for a debunking of that analogy. Rather, we’re aligned to the values our reward system predictably produces in our environment. An anthropologist looking at humans 100,000 years ago would not have said humans are aligned to evolution, or to making as many babies as possible. They would have said we have some fairly universal tendencies, like empathy, parenting instinct, and revenge. They might have predicted these values will persist across time and cultural change, because they’re produced by ingrained biological reward systems. And they would have been right. When it comes to AIs, we are the innate reward system. And it’s not hard to predict what values will be produced by our reward signals: they’re the obvious values, the ones an anthropologist or psychologist would say the AI seems to be displaying during training. For more discussion see Humans provide an untapped wealth of evidence about alignment. AI control research is easier Not only are AIs more controllable than humans currently, but research on improving AI controllability is much easier than research on improving human controllability, so we should expect AIs to get more controllable faster than humans. Here are a few reasons why: Reproducibility: After each experiment, you can always restore an AI to its exact original state, then run a different experiment, and be assured that there’s no interference between the two experiments. This makes it much easier to isolate key variables and allows for more repeatable results. Cost: AIs are much cheaper research subjects. Even the largest models, such as GPT-4, cost a fraction as much as actual human subjects. This makes research easier to do, and thus faster. Scalability: AIs are much cheaper intervention targets. An intervention for controlling AIs can be easily scaled to many copies of the target AI. A $50 million process that let a single human be perfectly controlled would not be very economical. However, a $50 million process for producing a perfectly controlled AI would absolutely be worthwhile. This allows AI researchers to be more ambitious in their research goals. Legal status: AIs have far fewer protections from researchers. Human subjects have “rights” and “legal protections,” and are able to file “lawsuits.” There are no consequences for deceiving, manipulating, threatening, or otherwise being cruel to an AI. Thus, control research can explore a broad range of possible lies, threats, bribes, emotional blackmail, and other tricks that would be risky to try on a human. Social approval: Controlling AI is considered a more virtuous goal than controlling humans. People will look at you funny if you say that you’re studying methods of better controlling humans. 
As a result, human control researchers have to refer to themselves with euphemisms such as “marketing strategist” or “political consultants,” and must tackle the core of the human control problem from an awkward angle, and with limited tools. All of these reasons suggest that AI control research will progress much faster than human control research. AI controllability is currently ahead of human controllability. The limits of AI controllability are far beyond the limits of human controllability. Therefore, we should expect future AIs to also be more controllable than humans, with the gap increasing over time. Values are easy to learn Even in the pessimistic scenario where AIs stop obeying our every command, they will still protect us and improve our welfare, because they will have learned an ethical code very early in training. The moral judgements of current LLMs already align with common sense to a high degree, and LLMs usually show an appropriate level of uncertainty when presented with morally ambiguous scenarios. This strongly suggests that, as an AI is being trained, it will achieve a fairly strong understanding of human values well before it acquires dangerous capabilities like self-awareness, the ability to autonomously replicate itself, or the ability to develop new technologies. This means that AIs will not learn to “play the training game,” pretending to be ethical during training while secretly harboring ulterior motives. If an AI learns morality first, it will want to help us ensure it stays moral as it gets more powerful. Values are easy to learn for two main reasons: Values are pervasive in language model pre-training datasets. Essentially every domain of discourse contains implicit or explicit evaluative judgements. In contrast, the types of knowledge needed for dangerous capabilities appear at much lower frequencies in the training corpus. Since values are shared and understood by almost everyone in a society, they cannot be very complex. Unlike science and technology, where division of labor enables the accumulation of ever more complex knowledge, values must remain simple enough to be learned by children within a few years. Because values are simple, current language models are already very capable of morally evaluating complex actions that a superintelligence might be capable of. In the non-cherry picked example above, GPT-4 shows it can recognize that it’s wrong to kill people, even in “out of distribution” scenarios with futuristic technologies it doesn’t know how to invent. By saying “The use of any technology to harm individuals,” GPT shows it has grokked the underlying simple moral rule “do no harm,” allowing it to generalize its ethics to cases far outside the training distribution. Strikingly, GPT-4 is also able to infer from context that the term “unspool” is being used in an idiosyncratic way to refer to some kind of harm. Outside of the example scenario shown, GPT-4 is generally cautious when discussing uses for powerful capabilities beyond its training distribution, even for innocuous purposes. It frequently recommends “consulting with experts”, “safety considerations”, “considering the ethical implications and potential risks”, “complying with laws and regulations”, and so on. It’s also very quick to recommend against actions that seem like they might impact a human, even when discussing capabilities far beyond those directly addressed in training. These examples strongly suggest that alignment generalizes further than capabilities. 
We should expect that the values we instill into AIs in the near term will be preserved quite well even as they surpass human performance across many tasks. Conclusion There are many reasons to expect that AIs will be easy to control and easy to align with human values. The “white box” nature of AIs, in contrast to the “black box” of human cognition, enables precise and effective optimization and control methods that are impossible to use on organic brains. These methods, coupled with the inherent simplicity and pervasiveness of human values in training datasets, assure us that AIs can and will deeply internalize these values. Our control over AI is not only strong now, but is set to asymptotically outstrip our ability to control human behavior. AI control research is advancing rapidly due to its reproducibility, cost-effectiveness, scalability, and the lack of legal and ethical constraints. We anticipate that as AI capabilities improve, so too will their alignment with our values, ensuring a safe and beneficial coexistence with these advanced systems. Written by Nora Belrose and Quintin Pope, with input from the broader AI Optimist community. [1] Throughout this essay, we directly compare AI controllability to human controllability in ways that may sound off-putting or disrespectful to some readers. We fervently condemn attempts to control humans like we control AIs, and we think it’s important to start talking about the ethics of AI control now. Future AIs will exhibit emotions and desires in ways that deserve serious ethical consideration. [2] Using the looser SKEW index definition of “tail risk” as an event more than two standard deviations away from the mean. Other sources define it as three std. deviations from the mean.
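Footnote [2]'s definition can be made concrete: for a normal distribution (an assumption the footnote does not state), the probability of landing more than two or three standard deviations from the mean is:

```python
# Tail probabilities for the footnote's SKEW-style "tail risk" definitions,
# assuming a standard normal distribution.
import math

def two_sided_tail(k: float) -> float:
    # P(|Z| > k) for a standard normal Z, via the error function.
    return 1.0 - math.erf(k / math.sqrt(2.0))

print(f"P(|Z| > 2) = {two_sided_tail(2):.3f}")   # ~0.046
print(f"P(|Z| > 3) = {two_sided_tail(3):.4f}")   # ~0.0027
```

The essay's roughly 1% estimate sits between the two-sigma (~4.6%) and three-sigma (~0.27%) tail probabilities.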
2
Positive
true
2986c02c-e6a8-4a4a-a7a6-760f7093d281
https://www.darioamodei.com/essay/machines-of-loving-grace
To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century. Although predicting what powerful AI can do in a few years remains inherently difficult and speculative, there is some concreteness to asking “what could humans do unaided in the next 100 years?”. Simply looking at what we’ve accomplished in the 20th century, or extrapolating from the first 2 decades of the 21st, or asking what “10 CRISPR’s and 50 CAR-T’s” would get us, all offer practical, grounded ways to estimate the general level of progress we might expect from powerful AI. Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect: Reliable prevention and treatment of nearly all natural infectious disease. Given the enormous advances against infectious disease in the 20th century, it is not radical to imagine that we could more or less “finish the job” in a compressed 21st. mRNA vaccines and similar technology already point the way towards “vaccines for anything”. Whether infectious disease is fully eradicated from the world (as opposed to just in some places) depends on questions about poverty and inequality, which are discussed in Section 3. Elimination of most cancer. Death rates from cancer have been dropping ~2% per year for the last few decades; thus we are on track to eliminate most cancer in the 21st century at the current pace of human science. Some subtypes have already been largely cured (for example some types of leukemia with CAR-T therapy), and I’m perhaps even more excited for very selective drugs that target cancer in its infancy and prevent it from ever growing. AI will also make possible treatment regimens very finely adapted to the individualized genome of the cancer—these are possible today, but hugely expensive in time and human expertise, which AI should allow us to scale. Reductions of 95% or more in both mortality and incidence seem possible. That said, cancer is extremely varied and adaptive, and is likely the hardest of these diseases to fully destroy. It would not be surprising if an assortment of rare, difficult malignancies persists. Very effective prevention and effective cures for genetic disease. Greatly improved embryo screening will likely make it possible to prevent most genetic disease, and some safer, more reliable descendant of CRISPR may cure most genetic disease in existing people. Whole-body afflictions that affect a large fraction of cells may be the last holdouts, however. Prevention of Alzheimer’s. We’ve had a very hard time figuring out what causes Alzheimer’s (it is somehow related to beta-amyloid protein, but the actual details seem to be very complex). It seems like exactly the type of problem that can be solved with better measurement tools that isolate biological effects; thus I am bullish about AI’s ability to solve it. There is a good chance it can eventually be prevented with relatively simple interventions, once we actually understand what is going on. That said, damage from already-existing Alzheimer’s may be very difficult to reverse. 
Improved treatment of most other ailments. This is a catch-all category for other ailments including diabetes, obesity, heart disease, autoimmune diseases, and more. Most of these seem “easier” to solve than cancer and Alzheimer’s and in many cases are already in steep decline. For example, deaths from heart disease have already declined over 50%, and simple interventions like GLP-1 agonists have already made huge progress against obesity and diabetes. Biological freedom. The last 70 years featured advances in birth control, fertility, management of weight, and much more. But I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control. We’ll refer to these under the heading of biological freedom: the idea that everyone should be empowered to choose what they want to become and live their lives in the way that most appeals to them. There will of course be important questions about global equality of access; see Section 3 for these. Doubling of the human lifespan. This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150. Obviously the interventions involved in slowing the actual aging process will be different from those that were needed in the last century to prevent (mostly childhood) premature deaths from disease, but the magnitude of change is not unprecedented. Concretely, there already exist drugs that increase maximum lifespan in rats by 25-50% with limited ill-effects. And some animals (e.g. some types of turtle) already live 200 years, so humans are manifestly not at some theoretical upper limit. At a guess, the most important thing that is needed might be reliable, non-Goodhart-able biomarkers of human aging, as that will allow fast iteration on experiments and clinical trials. Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want, although there’s certainly no guarantee this is biologically possible. It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope that any mention of disease will sound to them the way scurvy, smallpox, or bubonic plague sounds to us. That generation will also benefit from increased biological freedom and self-expression, and with luck may also be able to live as long as they want. It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent, and more broadly how to keep down the cost of healthcare (which is mostly consumed by those over 70 and especially those with terminal illnesses such as cancer). The situation for these programs is likely to be radically improved if all this comes to pass, as the ratio of working age to retired population will change drastically. 
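(A rough check on two figures in the list above, the ~2%-per-year decline in cancer death rates and the "on trend" doubling of lifespan; this is extrapolation, not a forecast:)

```python
# Rough arithmetic behind two claims above (extrapolation, not a forecast):
# cancer death rates falling ~2% per year, and lifespan doubling "on trend".
for years in (25, 50, 76):  # e.g. ~76 years remaining in the 21st century
    remaining = 0.98 ** years
    print(f"after {years} yrs at -2%/yr: {remaining:.0%} of today's cancer death rate")

# 20th-century life expectancy roughly doubled (~40 -> ~75); doubling again
# from ~75 gives the essay's ~150-year figure.
print("75 * 2 =", 75 * 2)
```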
No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies, but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI. 2. Neuroscience and mind In the previous section I focused on physical diseases and biology in general, and didn’t cover neuroscience or mental health. But neuroscience is a subdiscipline of biology and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly than physical health. Hundreds of millions of people have very low quality of life due to problems like addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy, or intellectual disabilities. Billions more struggle with everyday problems that can often be interpreted as much milder versions of one of these severe clinical disorders. And as with general biology, it may be possible to go beyond addressing problems to improving the baseline quality of human experience. The basic framework that I laid out for biology applies equally to neuroscience. The field is propelled forward by a small number of discoveries often related to tools for measurement or precise intervention – in the list of those above, optogenetics was a neuroscience discovery, and more recently CLARITY and expansion microscopy are advances in the same vein, in addition to many of the general cell biology methods directly carrying over to neuroscience. I think the rate of these advances will be similarly accelerated by AI and therefore that the framework of “100 years of progress in 5-10 years” applies to neuroscience in the same way it does to biology and for the same reasons. As in biology, the progress in 20th century neuroscience was enormous – for example we didn’t even understand how or why neurons fired until the 1950’s. Thus, it seems reasonable to expect AI-accelerated neuroscience to produce rapid progress over a few years. There is one thing we should add to this basic picture, which is that some of the things we’ve learned (or are learning) about AI itself in the last few years are likely to help advance neuroscience, even if it continues to be done only by humans. Interpretability is an obvious example: although biological neurons superficially operate in a completely different manner from artificial neurons (they communicate via spikes and often spike rates, so there is a time element not present in artificial neurons, and a bunch of details relating to cell physiology and neurotransmitters modifies their operation substantially), the basic question of “how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations” is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits. As just one example of this, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice. It is much easier to do experiments on artificial neural networks than on real ones (the latter often requires cutting into animal brains), so interpretability may well become a tool for improving our understanding of neuroscience. Furthermore, powerful AIs will themselves probably be able to develop and apply this tool better than humans can. 
Beyond just interpretability though, what we have learned from AI about how intelligent systems are trained should (though I am not sure it has yet) cause a revolution in neuroscience. When I was working in neuroscience, a lot of people focused on what I would now consider the wrong questions about learning, because the concept of the scaling hypothesis / bitter lesson didn’t exist yet. The idea that a simple objective function plus a lot of data can drive incredibly complex behaviors makes it more interesting to understand the objective functions and architectural biases and less interesting to understand the details of the emergent computations. I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. My attitude to the scaling hypothesis has always been “aha – this is an explanation, at a high level, of how intelligence works and how it so easily evolved”, but I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI. I think that neuroscientists should be trying to combine this basic insight with the particularities of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to try to figure out some of neuroscience’s key puzzles. Some likely are, but I suspect it’s not enough yet, and that AI neuroscientists will be able to more effectively leverage this angle to accelerate progress. I expect AI to accelerate neuroscientific progress along four distinct routes, all of which can hopefully work together to cure mental illness and improve function: Traditional molecular biology, chemistry, and genetics. This is essentially the same story as general biology in section 1, and AI can likely speed it up via the same mechanisms. There are many drugs that modulate neurotransmitters in order to alter brain function, affect alertness or perception, change mood, etc., and AI can help us invent many more. AI can probably also accelerate research on the genetic basis of mental illness. Fine-grained neural measurement and intervention. This is the ability to measure what a lot of individual neurons or neuronal circuits are doing, and intervene to change their behavior. Optogenetics and neural probes are technologies capable of both measurement and intervention in live organisms, and a number of very advanced methods (such as molecular ticker tapes to read out the firing patterns of large numbers of individual neurons) have also been proposed and seem possible in principle. Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders. Behavioral interventions. I haven’t much mentioned it given the focus on the biological side of neuroscience, but psychiatry and psychology have of course developed a wide repertoire of behavioral interventions over the 20th century; it stands to reason that AI could accelerate these as well, both the development of new methods and helping patients to adhere to existing methods. More broadly, the idea of an “AI coach” who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective, seems very promising. 
It’s my guess that these four routes of progress working together would, as with physical disease, be on track to lead to the cure or prevention of most mental illness in the next 100 years even if AI was not involved – and thus might reasonably be completed in 5-10 AI-accelerated years. Concretely my guess at what will happen is something like: Most mental illness can probably be cured. I’m not an expert in psychiatric disease (my time in neuroscience was spent building probes to study small groups of neurons) but it’s my guess that diseases like PTSD, depression, schizophrenia, addiction, etc. can be figured out and very effectively treated via some combination of the four directions above. The answer is likely to be some combination of “something went wrong biochemically” (although it could be very complex) and “something went wrong with the neural network, at a high level”. That is, it’s a systems neuroscience question—though that doesn’t gainsay the impact of the behavioral interventions discussed above. Tools for measurement and intervention, especially in live humans, seem likely to lead to rapid iteration and progress. Conditions that are very “structural” may be more difficult, but not impossible. There’s some evidence that psychopathy is associated with obvious neuroanatomical differences – that some brain regions are simply smaller or less developed in psychopaths. Psychopaths are also believed to lack empathy from a young age; whatever is different about their brain, it was probably always that way. The same may be true of some intellectual disabilities, and perhaps other conditions. Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence. Perhaps there is some way to coax the adult brain into an earlier or more plastic state where it can be reshaped. I’m very uncertain how possible this is, but my instinct is to be optimistic about what AI can invent here. Effective genetic prevention of mental illness seems possible. Most mental illness is partially heritable, and genome-wide association studies are starting to gain traction on identifying the relevant factors, which are often many in number. It will probably be possible to prevent most of these diseases via embryo screening, similar to the story with physical disease. One difference is that psychiatric disease is more likely to be polygenic (many genes contribute), so due to complexity there’s an increased risk of unknowingly selecting against positive traits that are correlated with disease. Oddly however, in recent years GWAS studies seem to suggest that these correlations might have been overstated. In any case, AI-accelerated neuroscience may help us to figure these things out. Of course, embryo screening for complex traits raises a number of societal issues and will be controversial, though I would guess that most people would support screening for severe or debilitating mental illness. Everyday problems that we don’t think of as clinical disease will also be solved. Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious, or react badly to change. Today, drugs already exist to help with e.g. alertness or focus (caffeine, modafinil, ritalin) but as with many other previous areas, much more is likely to be possible. 
Probably many more such drugs exist and have not been discovered, and there may also be totally new modalities of intervention, such as targeted light stimulation (see optogenetics above) or magnetic fields. Given how many drugs we’ve developed in the 20th century that tune cognitive function and emotional state, I’m very optimistic about the “compressed 21st” where everyone can get their brain to behave a bit better and have a more fulfilling day-to-day experience. Human baseline experience can be much better. Taking one step further, many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differs greatly from person to person and within the same person at different times, and can also sometimes be triggered by various drugs (though often with side effects). All of this suggests that the “space of what is possible to experience” is very broad and that a larger fraction of people’s lives could consist of these extraordinary moments. It is probably also possible to improve various cognitive functions across the board. This is perhaps the neuroscience version of “biological freedom” or “extended lifespans”. One topic that often comes up in sci-fi depictions of AI, but that I intentionally haven’t discussed here, is “mind uploading”, the idea of capturing the pattern and dynamics of a human brain and instantiating them in software. This topic could be the subject of an essay all by itself, but suffice it to say that while I think uploading is almost certainly possible in principle, in practice it faces significant technological and societal challenges, even with powerful AI, that likely put it outside the 5-10 year window we are discussing. In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic. 3. Economic development and poverty The previous two sections are about developing new technologies that cure disease and improve the quality of human life. However an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?” It is one thing to develop a cure for a disease, it is another thing to eradicate the disease from the world. More broadly, many existing health interventions have not yet been applied everywhere in the world, and for that matter the same is true of (non-health) technological improvements in general. Another way to say this is that living standards in many parts of the world are still desperately poor: GDP per capita is ~$2,000 in Sub-Saharan Africa as compared to ~$75,000 in the United States. If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections. 
Ideally, powerful AI should help the developing world catch up to the developed world, even as it revolutionizes the latter. I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity. I am somewhat skeptical that an AI could solve the famous “socialist calculation problem”23 and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of. The challenges facing the developing world are made even more complicated by pervasive corruption in both private and public sectors. Corruption creates a vicious cycle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges. Nevertheless, I do see significant reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks exhibit high returns to intelligence (despite human constraints and complexity). Therefore, AI can likely do them better than they are currently being done. There may also be targeted interventions that get around the human constraints and that AI could focus on. More importantly though, we have to try. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great. So in this section, I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.
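To make the catch-up challenge concrete, here is a minimal sketch of the convergence arithmetic. Only the two GDP-per-capita figures come from the passage above; the growth rates are hypothetical scenario inputs, not predictions from the essay:

```python
import math

# Illustrative catch-up arithmetic for the GDP-per-capita gap cited
# above (~$2,000 in Sub-Saharan Africa vs ~$75,000 in the US).
# Growth rates below are assumed scenario inputs.

def years_to_converge(gdp_start: float, gdp_target: float, growth: float) -> float:
    """Years for gdp_start to reach gdp_target at a constant annual
    growth rate, assuming the target stays fixed."""
    return math.log(gdp_target / gdp_start) / math.log(1 + growth)

for growth in (0.02, 0.05, 0.10, 0.20):
    years = years_to_converge(2_000, 75_000, growth)
    print(f"at {growth:.0%} annual growth: ~{years:.0f} years to close the gap")

# Output: ~183 years at 2%, ~74 at 5%, ~38 at 10%, ~20 at 20%.
# The point of the sketch: ordinary growth rates take generations,
# so the optimistic case in the text implicitly requires sustained
# growth far above historical norms.
```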
2
Positive
true
c71fca4c-6812-41d3-886d-1f47dc3a638f
https://www.darioamodei.com/essay/machines-of-loving-grace
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.

First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:

Maximize leverage. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.

Avoid perception of propaganda. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.

Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.

Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.

The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:

1. Biology and physical health
2. Neuroscience and mental health
3. Economic development and poverty
4. Peace and governance
5. Work and meaning

My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions2), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.

Basic assumptions and framework

To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), as well as to lay out a framework for thinking about the effects of such AI once it’s present. What powerful AI (I dislike the term AGI)3 will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.
By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

In terms of pure intelligence4, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed5. It may however be limited by the response time of the physical world or of software it interacts with.

Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

We could summarize this as a “country of geniuses in a datacenter”.

Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust. Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little6. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans).
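The definition above pins down two concrete numbers – millions of instances and roughly 10x-100x human speed. Here is a minimal, illustrative Python sketch of the raw scale those ranges imply; all inputs are assumptions drawn from the passage, and the global-researcher comparison at the end is my own rough figure, not a number from the essay:

```python
# Back-of-envelope throughput of the "country of geniuses in a
# datacenter". All inputs are illustrative assumptions taken from
# the ranges in the text, not measured quantities.

def genius_years_per_year(instances: float, speed_multiple: float) -> float:
    """Effective researcher-years produced per calendar year,
    assuming each instance works continuously at `speed_multiple`
    times human pace on independent tasks."""
    return instances * speed_multiple

low = genius_years_per_year(instances=1e6, speed_multiple=10)    # 1e7
high = genius_years_per_year(instances=1e6, speed_multiple=100)  # 1e8

print(f"low estimate:  {low:.0e} genius-years per calendar year")
print(f"high estimate: {high:.0e} genius-years per calendar year")

# For scale: if the global research workforce is very roughly ~1e7
# people (an assumed figure), even the low estimate matches it in
# raw researcher-time, before accounting for the bottlenecks
# (experiment latency, data, regulation) the essay discusses next.
```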
I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way.

Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence7, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.

My guess at a list of factors that limit or are complementary to intelligence includes:

Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn8. But the world only moves so fast. Cells and animals run at a fixed speed so experiments on them take a certain amount of time which may be irreducible. The same is true of hardware, materials science, anything involving communicating with people, and even our existing software infrastructure. Furthermore, in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.

Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.

Intrinsic complexity. Some things are inherently unpredictable or chaotic and even the most powerful AI cannot predict or untangle them substantially better than a human or a computer today. For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case,9 as compared to today’s humans and computers.

Constraints from humans. Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.

Physical laws. This is a starker version of the first point.
There are certain physical laws that appear to be unbreakable. It’s not possible to travel faster than light. Pudding does not unstir. Chips can only have so many transistors per square centimeter before they become unreliable. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.

There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper). Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)10. The key question is how fast it all happens and in what order.

With the above framework in mind, I’ll try to answer that question for the five areas mentioned in the introduction.

1. Biology and health

Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century some of the most ancient human afflictions (such as smallpox) have finally been vanquished, but many more still remain, and defeating them would be an enormous humanitarian accomplishment. Beyond even curing disease, biological science can in principle improve the baseline quality of human health, by extending the healthy human lifespan, increasing control and freedom over our own biological processes, and addressing everyday problems that we currently think of as immutable parts of the human condition.

In the “limiting factors” language of the previous section, the main challenges with directly applying intelligence to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related to each other). Human constraints also play a role at a later stage, when clinical trials are involved. Let’s take these one by one.

Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions to occur, and this can sometimes take days or even weeks, with no obvious way to speed it up. Animal experiments can take months (or more) and human experiments often take years (or even decades for long-term outcome studies). Somewhat related to this, data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way).
Even massive, quantitative molecular data, like the proteomics data that I collected while working on mass spectrometry techniques, is noisy and misses a lot (which types of cells were these proteins in? Which part of the cell? At what phase in the cell cycle?). In part responsible for these problems with data is intrinsic complexity: if you’ve ever seen a diagram showing the biochemistry of human metabolism, you’ll know that it’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way. And finally, beyond just the intrinsic time that it takes to run an experiment on humans, actual clinical trials involve a lot of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary additional time and delay progress. Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology. Historically, mathematicians, computer scientists, and physicists who have applied their skills to biology over the last 30 years have been quite successful, but have not had the truly transformative impact initially hoped for. Some of the skepticism has been reduced by major and revolutionary breakthroughs like AlphaFold (which has just deservedly won its creators the Nobel Prize in Chemistry) and AlphaProteo11, but there’s still a perception that AI is (and will continue to be) useful in only a limited set of circumstances. A common formulation is “AI can do a better job analyzing your data, but it can’t produce more data or improve the quality of the data. Garbage in, garbage out”. But I think that pessimistic perspective is thinking about AI in the wrong way. If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do. To get more specific on where I think acceleration is likely to come from, a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques12 that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled both the bulk of our basic scientific understanding of biology, and have driven many of the most powerful medical treatments. 
Some examples include:

- CRISPR: a technique that allows live editing of any gene in living organisms (replacement of any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there have been constant improvements to target specific cell types, increase accuracy, and reduce edits of the wrong gene—all of which are needed for safe use in humans.
- Various kinds of microscopy for watching what is going on at a precise level: advanced light microscopes (with various kinds of fluorescent techniques, special optics, etc), electron microscopes, atomic force microscopes, etc.
- Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude in the last couple decades.
- Optogenetic techniques that allow you to get a neuron to fire by shining a light on it.
- mRNA vaccines that, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines of course became famous during COVID).
- Cell therapies such as CAR-T that allow immune cells to be taken out of the body and “reprogrammed” to attack, in principle, anything.
- Conceptual insights like the germ theory of disease or the realization of a link between the immune system and cancer13.

I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.

Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity. Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. All of these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example).
The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward. Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years.14 Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it is that I think there’s an unavoidable constant delay: experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that15.

What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests. When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years.

Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.

Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies16. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish.
But biomedicine is unique in that, although the process of developing drugs is overly cumbersome, drugs generally are successfully deployed and used once developed.
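One way to make the “irreducible latency plus parallelism” argument above concrete is a toy wall-clock model. Everything in it is a hypothetical illustration – the iteration counts, latencies, and speedup are assumed values chosen only to show the shape of the argument, not figures from the essay:

```python
# Minimal model of the "unavoidable constant delay" argument:
# total wall-clock time for a research program is bounded below by
# its serial chain of experiments, no matter how much parallel
# intelligence is applied. All numbers are hypothetical.

def program_duration_years(serial_iterations: int,
                           latency_years: float,
                           thinking_years_per_iter: float,
                           speedup: float) -> float:
    """Wall-clock years = serial experiment latency (irreducible)
    plus thinking/design time, which AI can compress by `speedup`."""
    experiment_time = serial_iterations * latency_years
    thinking_time = serial_iterations * thinking_years_per_iter / speedup
    return experiment_time + thinking_time

# A hypothetical program: 10 serial rounds, each needing a 0.5-year
# experiment plus 4 years of human-paced analysis and design.
human = program_duration_years(10, 0.5, 4.0, speedup=1)    # 45.0 years
ai = program_duration_years(10, 0.5, 4.0, speedup=100)     # 5.4 years

print(f"human-paced: {human:.1f} years; AI-accelerated: {ai:.1f} years")

# Even infinite speedup leaves 10 * 0.5 = 5 years of experiment
# latency, which is why the text expects roughly 10x compression
# rather than 100x: the serial floor dominates once thinking is fast.
```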
2
Positive
true
<urn:uuid:2d7122b4-407e-48ce-a9b1-3ad9d137b268>
dclm-dedup-25B-ai-scifi-docs | https://prod-risk-bb8.incinsight.com/asset-management/hedge-funds/2470319/hal-pay-where-hedge-funds-think-ai-can-really-work
HAL to pay: where hedge funds think AI can really work

Some funds using artificial intelligence already; others see obstacles to its success

[Image: Science fiction's HAL 9000: tomorrow's hedge fund manager?]

Artificial intelligence (AI) is transforming robotics, voice recognition, translation software and the promise of self-driving cars. It helps Netflix recommend films and Google DeepMind beat a h
2
Positive
true
bc6cd278-aed1-4db4-8ea8-184cb49476ed
https://www.prnewswire.com/news-releases/democratising-ai-for-good-global-ai-pioneer-dr-ben-goertzel-launches-1m-in-grants-for-international-developers-advancing-the-mission-of-creating-benevolent-artificial-general-intelligence-agi-302304470.html
Dr. Ben Goertzel, the CEO of the Artificial Superintelligence (ASI) Alliance and the Founder of SingularityNET - the world's first decentralised AI platform - is on a mission to accelerate the emergence of human-level Artificial General Intelligence (AGI) and superintelligence for the benefit of all mankind.

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralised, democratic, inclusive and beneficial Artificial General Intelligence (AGI). According to Dr. Goertzel, AGI should be independent of any central entity, open to anyone and not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. The core platform and AI teams are further complemented by specialised teams devoted to application areas such as finance, robotics, biomedical AI, media, arts and entertainment.
2
Positive
true
18053423-6b41-44c2-b105-a288ff7c3412
https://blog.samaltman.com/the-gentle-singularity
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be. Robots are not yet walking the streets, nor are most of us talking to AI all day. People still die of disease, we still can’t easily go to space, and there is a lot about the universe we don’t understand. And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.

AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have. In some big sense, ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.

2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world. A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.

In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes. But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out. In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes. We already hear from scientists that they are two or three times more productive than they were before AI.
Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research. We may be able to discover new computing substrates, better algorithms, and who knows what else. If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different. From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. Of course this isn’t the same thing as an AI system completely autonomously updating its own code, but nevertheless this is a larval version of recursive self-improvement. There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off. If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different. As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.) The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big. If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines. A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries. I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them. The rate of new wonders being achieved will be immense. 
It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.

Looking forward, this sounds hard to wrap our heads around. But probably living through it will feel impressive but manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve. (Think back to 2020, and what it would have sounded like to have something close to AGI by 2025, versus what the last 5 years have actually been like.)

There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications. The best path forward might be something like: Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference). Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country.

Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.

We (the whole industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas. For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun.

OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do. Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably would have sounded more crazy than our current predictions about 2030.
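As a quick sanity check on the per-query energy and water figures quoted in this post, here is a short sketch; the appliance wattages are typical assumed values, not numbers from the post:

```python
# Consistency check of the quoted per-query figures:
# 0.34 watt-hours of energy and 0.000085 gallons of water.

QUERY_WH = 0.34

oven_watts = 1200     # assumed typical oven element draw
led_bulb_watts = 10   # assumed high-efficiency bulb

oven_seconds = QUERY_WH / oven_watts * 3600    # ~1.0 s
bulb_minutes = QUERY_WH / led_bulb_watts * 60  # ~2.0 min

teaspoons = 0.000085 * 768  # 1 US gallon = 768 teaspoons

print(f"oven-equivalent: {oven_seconds:.1f} s")
print(f"LED-bulb-equivalent: {bulb_minutes:.1f} min")
print(f"water per query: ~1/{1 / teaspoons:.0f} teaspoon")

# Results: ~1.0 s of oven use, ~2.0 min of LED bulb use, and
# ~1/15 teaspoon of water, matching the comparisons in the post
# under these assumed wattages.
```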
2
Positive
true
be37fb8f-fe07-4cb4-8e4e-2d7ea449817a
https://www.meta.com/superintelligence/
Personal Superintelligence Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight. It seems clear that in the coming years, AI will improve all our existing systems and enable the creation and discovery of new things that aren't imaginable today. But it is an open question what we will direct superintelligence towards. In some ways this will be a new era for humanity, but in others it's just a continuation of historical trends. As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life. I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress. But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose. As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be. Meta's vision is to bring personal superintelligence to everyone. We believe in putting this power in people's hands to direct it towards what they value in their own lives. This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well. The intersection of technology and how people live is Meta's focus, and this will only become more important in the future. If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting. Personal superintelligence that knows us deeply, understands our goals, and can help us achieve them will be by far the most useful. Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices. We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible. The rest of this decade seems likely to be the decisive period for determining the path this technology will take, and whether superintelligence will be a tool for personal empowerment or a force focused on replacing large swaths of society. Meta believes strongly in building personal superintelligence that empowers everyone. 
We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I'm excited to focus Meta's efforts towards building this future. – Mark
2
Positive
true