It's here: months after it was first announced, Nightshade, a new, free software tool allowing artists to "poison" AI models seeking to train on their works, is now available for artists to download and use on any artworks they see fit.

Developed by computer scientists on the Glaze Project at the University of Chicago under Professor Ben Zhao, the tool essentially works by turning AI against AI. It makes use of the popular open-source machine learning framework PyTorch to identify what's in a given image, then applies a tag that subtly alters the image at the pixel level so that other AI programs see something totally different from what's actually there.

"Today is the day. Nightshade v1.0 is ready. Performance tuning is done, UI fixes are done. You can download Nightshade v1.0 from https://t.co/knwLJSRrRh. Please read the what-is page and also the User's Guide on how to run Nightshade. It is a bit more involved than Glaze." — Glaze at UChicago (@TheGlazeProject) January 19, 2024

It's the second such tool from the team: nearly one year ago, it unveiled Glaze, a separate program designed to alter digital artwork at a user's behest to confuse AI training algorithms into perceiving a different style than is actually present (such as different colors and brush strokes than are really there).

But whereas the Chicago team designed Glaze to be a defensive tool (and still recommends artists use it in addition to Nightshade to prevent an artist's style from being imitated by AI models), Nightshade is designed as "an offensive tool." An AI model that ended up training on many images altered or "shaded" with Nightshade would likely erroneously categorize objects going forward for all users of that model.

"For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass," the team explains. An AI model trained on images of a cow shaded to look like a purse would therefore start to generate purses instead of cows, even when the user asked the model for a picture of a cow.

Requirements and how Nightshade works

Artists seeking to use Nightshade must have a Mac with an Apple silicon chip (M1, M2 or M3) or a PC running Windows 10 or 11. The tool can be downloaded for both OSes here. The Windows version is also capable of running on a PC's GPU, provided it is an Nvidia card on this list of supported hardware. Some users have reported long download times due to the overwhelming demand for the tool, in some cases as long as eight hours (the two versions are 255MB and 2.6GB in size for Mac and PC, respectively).

[Screenshot of a comment on the Glaze/Nightshade Project Instagram account. Credit: VentureBeat]
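How might such "shading" work under the hood? Nightshade's exact optimization isn't spelled out here, but the broad family of techniques it belongs to, pixel-level adversarial perturbation, can be sketched generically in PyTorch, the same framework the tool builds on. Everything below is a hypothetical illustration rather than Nightshade's actual code: the ResNet-50 feature extractor, the shade function, and all parameters are assumptions chosen for the sketch.

import torch
import torchvision.models as models

# Hypothetical sketch only -- NOT Nightshade's actual method.
# An off-the-shelf ResNet-50, with its classification head removed,
# stands in for whatever feature space a poisoning tool might target.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(*list(model.children())[:-1])
for p in extractor.parameters():
    p.requires_grad_(False)

def shade(image, decoy, steps=200, eps=8 / 255, lr=5e-3):
    """Nudge `image` (a 1x3xHxW tensor in [0,1]) so the extractor's
    features resemble those of `decoy`, under a small per-pixel budget
    `eps` that keeps the edit hard for human eyes to notice."""
    target = extractor(decoy).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = extractor((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feats, target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # enforce the visual-subtlety budget
    return (image + delta).clamp(0, 1).detach()

# Usage, echoing the article's example: a cow image and a purse decoy.
# cow, purse = <1x3x224x224 tensors in [0, 1]>
# shaded = shade(cow, purse)  # looks like a cow; "reads" like a purse

A model that later trains on many such shaded images would learn an association between cow-like pictures and purse-like features, which is the miscategorization behavior the team describes.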
Users must also agree to the Glaze/Nightshade team's end-user license agreement (EULA), which stipulates that they use the tool on machines under their control and don't modify the underlying source code, nor "Reproduce, copy, distribute, resell or otherwise use the Software for any commercial purpose."

Nightshade v1.0 "transforms images into 'poison' samples, so that [AI] models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space," states a blog post from the development team on its website.

That is, by using Nightshade v1.0 to "shade" an image, the image is transformed into a new version thanks to open-source AI libraries: ideally subtly enough that it doesn't look much different to the human eye, but so that it appears to contain totally different subjects to any AI models training on it.

In addition, the tool is resilient to most of the typical transformations and alterations a user or viewer might make to an image. As the team explains: "You can crop it, resample it, compress it, smooth out pixels, or add noise, and the effects of the poison will remain. You can take screenshots, or even photos of an image displayed on a monitor, and the shade effects remain. Again, this is because it is not a watermark or hidden message (steganography), and it is not brittle."

Applause and condemnation

While some artists have rushed to download Nightshade v1.0 and are already making use of it (among them Kelly McKernan, one of the former lead artist plaintiffs in the ongoing class-action copyright infringement lawsuit against AI art and video generator companies Midjourney, DeviantArt, Runway, and Stability AI), some web users have complained about it, suggesting it is tantamount to a cyberattack on AI models and companies. (VentureBeat uses Midjourney and other AI image generators to create article header artwork.)

"I'm terribly excited to share that 'Artifact' has been Glazed and Nightshaded by @TheGlazeProject and what a perfect piece for it as well. This is a painting about generative AI cannibalizing the authentic voice of human creatives. When this image is scraped for training, well… pic.twitter.com/0VNFIyabc2" — Kelly McKernan (@Kelly_McKernan) January 14, 2024

"Ahahah the cope is insane. Dude is legit arguing against glazing your images because it's 'illegal' in his eyes. He compared it to having his PC hacked because it 'disrupts his operation.' I am delighted pic.twitter.com/BhMP73BkUb" — Jade (@jadel4w) January 19, 2024

The Glaze/Nightshade team, for its part, denies it is seeking destructive ends, writing: "Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative." In other words, the creators want to make it so that AI model developers must pay artists to train on uncorrupted data from them.

The latest front in the fast-moving fight over data scraping

How did we get here? It all comes down to how AI image generators have been trained: by scraping data from across the web, including original artworks posted by artists who had no prior express knowledge of, or decision-making power over, the practice, and who say the resulting AI models trained on their works threaten their livelihoods by competing with them.
As VentureBeat has reported, data scraping involves letting simple programs called "bots" scour the internet and copy and transform data from public-facing websites into other formats that are helpful to the person or entity doing the scraping. It has been a common practice on the internet since well before the advent of generative AI, and is roughly the same technique used by Google and Bing to crawl and index websites for search results.

But it has come under new scrutiny from artists, authors, and creatives who object to their work being used without their express permission to train commercial AI models that may compete with or replace their work product.

AI model makers defend the practice not only as necessary to train their creations, but as lawful under "fair use," the legal doctrine in the U.S. holding that prior work may be used in new work if it is transformed and used for a new purpose.

Though AI companies such as OpenAI have introduced "opt-out" code that objectors can add to their websites to avoid being scraped for AI training, the Glaze/Nightshade team notes that "Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence."

Nightshade, then, was conceived and designed as a tool to "address this power asymmetry." The team further explains its end goal: "Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization."

Basically: make widespread data scraping more…
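For context on the "opt-out" code and scraping "bots" described above: the opt-out is typically a robots.txt directive, and a scraping bot is, at its simplest, a short script that fetches public pages. The sketch below is a generic illustration, not any company's actual crawler; the site URL and page path are placeholders, while GPTBot is OpenAI's documented crawler token.

# Generic sketch of a scraping "bot" that checks robots.txt opt-out rules.
# Illustration only -- not any AI company's actual crawler.
#
# A site opting out of OpenAI's crawler publishes in its /robots.txt:
#     User-agent: GPTBot
#     Disallow: /

import urllib.request
import urllib.robotparser

SITE = "https://example.com"   # placeholder domain
USER_AGENT = "GPTBot"          # OpenAI's documented crawler token

rules = urllib.robotparser.RobotFileParser()
rules.set_url(SITE + "/robots.txt")
rules.read()  # fetch and parse the site's directives

def fetch(path):
    """Download a page only if robots.txt permits this user agent."""
    url = SITE + path
    if not rules.can_fetch(USER_AGENT, url):
        print(f"robots.txt disallows {url}; skipping")
        return None
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

page = fetch("/gallery/artwork-123.html")

Note that the can_fetch check is entirely voluntary on the crawler's side; nothing technically prevents a scraper from skipping it, which is exactly the enforcement gap the Glaze/Nightshade team describes, and the asymmetry Nightshade's per-image "price" is meant to offset.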