Nearly 200 AI leaders, researchers and data scientists have signed an open letter, published last Tuesday by the "Responsible AI Community," which, in addition to condemning Israel's "latest violence against the Palestinian people in Gaza and the West Bank," says "we also condemn the use of AI-driven technologies for warmaking, in which the aim is to make the loss of human life more efficient, and the instances in which anti-Palestinian biases are perpetuated throughout AI-enabled systems."

The letter, which appears to have first been circulated by Tina Park, head of Inclusive Research and Design at the Partnership on AI, calls for the withdrawal of technology support from the Israeli government and an end to defense contracts with the Israeli government and military. It also says that "history did not start on October 7, 2023, but the current crisis reflects the horrific scale and extent of violence enabled by the use of AI-driven technologies." The Israeli government's use of AI-driven technology, the letter continues, has "led to strikes against over 11,000 targets in Gaza since the latest conflict started on October 7, 2023."

The letter's signers include Timnit Gebru, AI ethics researcher and founder of DAIR (the Distributed AI Research Institute); Alex Hanna, director of research at DAIR; Abeba Birhane, senior fellow of trustworthy AI at the Mozilla Foundation; Emily Bender, professor of linguistics at the University of Washington; and Sarah Myers West, managing director of the AI Now Institute.

Israeli and Jewish AI leaders have pushed back on the letter

Israeli and Jewish AI leaders, some of whom are also part of the AI ethics/responsible AI community, have pushed back on the letter, saying it is one more in a series of examples of prominent AI ethicists either "applauding" the Hamas attacks against Israel on October 7 or remaining silent about them. The open letter does not mention the Israeli hostages being held in Gaza, nor does it condemn Hamas for the October 7 attacks.

Jules Polonetsky, CEO of the Future of Privacy Forum, said it is "deeply distressing to me that this letter doesn't devote a single word to condemning the massacre committed by Hamas." In addition, he said, "there are absolutely complicated moral issues to weigh when technology is used in military conflict," but a "one-sided broadside like this unfortunately does little to shed light on the path to ending bloodshed."

Yoav Goldberg, a professor at Bar-Ilan University and research director at AI2-Israel, also emphasized that many of the "AI systems" or "surveillance systems" described in the letter have "likely saved countless Palestinian lives." For example, he said he was certain that AI technologies, or AI-human collaborative interfaces, are being used to try to track the hostages. "Finding hostages will make things conclude faster, saving lives," he said.
AI is also used to guide missiles, he added, making them more likely to hit their intended targets and not random people, and to surface targets. "It means the IDF can target legitimate military targets and not random ones," he explained. Finally, he pointed out that before October 7, "many attacks, probably on a smaller scale, were prevented, presumably also due to use of systems that involve 'AI,' especially around intelligence." Prevented attacks, he explained, also prevented Israeli retaliations: "Consider a hypothetical system that would have been able to notify/warn about October 7 in advance; we wouldn't have been in this war now," he said.

Shira Eisenberg, an AI engineer currently based in the Washington, DC area and a member of the scientific council of the Israeli Association for Ethics in Artificial Intelligence, added that "AI is a critical wartime technology and is being used to translate intercepted Russian messages in the war in Ukraine, as well." She agreed that Israel must be responsible in its use of AI, "but to rule out wartime use is to jeopardize the safety of many." She pointed to systems like Israel's AI-powered Iron Dome, which regularly intercepts missiles launched from Gaza into Israel.

In addition, several of those Israeli and Jewish AI leaders say they have felt shocked, pained and disappointed by comments on social media since October 7 that they consider to be anti-semitic, one-sided and highly insensitive, posted by some of the same AI researchers and industry leaders who signed the open letter.

"In the first weeks I, and many others in Israel, especially in academia and in Israeli left-wing circles, were completely shocked by this," said Goldberg. "We felt betrayed. We felt confused. We felt alone. How can people who we know, who we thought we shared values with, show such weak moral judgment? How can they be one-sided? How can they be so shallow? But now… I am not surprised anymore. I am not outraged. I am just very, very sad and very, very disappointed."

Eran Toch, an associate professor at Tel Aviv University whose research examines the boundary between humans and computers, with a focus on usable privacy and security, machine learning and online safety, added that "I think it makes Israeli members of the critical AI community feel very alone." In Israel, as elsewhere, he said, "people of this community are more political and are more progressive than others. The fact that people we thought were part of our community have shown zero empathy and curiosity about our experience stings bitterly."

Many Israelis in AI ethics, he said, "try to fight for a resolution to the conflict with the Palestinians and human rights, including digital ones for both Palestinians and Israelis. It's tough to do that with no external allies." Beyond politics, he added, "many fear, and I have experienced, discrimination in professional circles. People review our papers as if it's a social media thread battle."

In addition, he said he is "particularly concerned" about what he described as a "conspiracy theory that is propagated in the anti-Israel letters written by members of the community." The idea, he explained, is that Israel "is the epicenter of inhuman AI technology, used against Palestinians and then against other people in the Global South. This theory repeats centuries-old anti-semitic tropes that connect Jews, technology, and oppression." While he agreed that Israel is a technological hub, and said he is critical of it himself, the technologies are "no different from the ones produced in Silicon Valley or London." Creating the idea of Israel as the mastermind of AI "is a conspiracy theory that I already see propagated in deep anti-semitic circles. I think these letters are dangerous."

VentureBeat reached out to Gebru and Hanna for comment. While Hanna would not respond on the record, Gebru responded by saying: "I am not going to engage. Enough is enough. We are seeing things with our own eyes and students (mostly of color) are getting incessantly harassed and doxxed. I am going to focus on my support of Palestinians."

Some see these issues causing a fracture, or schism, within the AI community and the tech community at large that has been growing steadily since October 7. Last month, for example, several AI leaders withdrew from Web Summit in Lisbon, Europe's premier technology conference. The decision came after the event's founder and CEO, Paddy Cosgrave, called Israel's actions following Hamas' October 7 surprise terror attack "war crimes."

"I definitely see this as a schism in the AI community, and the tech community at large," said Eisenberg. "VC has also been affected, with prominent figures coming out on either side of the issue. I wouldn't say this is super problematic for the AI community; it's not as divisive as the accelerationist/decelerationist line, but it is a major fracture."

Dan Kotliar, a researcher at the University of…