Cybersecurity company Home Security Heroes published a study on artificial intelligence and password cracking, focusing on PassGAN (Password Generating Adversarial Network), an AI-based password-cracking tool. The researchers ran a list of more than 15 million passwords through PassGAN. The results show that 51% of commonly used passwords can be cracked within one minute, 65% within one hour, 71% within one day, and 81% within one month. The team also presented the results in tabular form, showing that almost every password of six characters or fewer was cracked instantly. According to the organization, passwords longer than 18 characters are considered safe against tools like PassGAN; the table indicates that it would take the tool at least 10 months to crack an 18-character password made up only of numbers. (Source: Home Security Heroes)
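To put the length and character-set figures in perspective, the sketch below (in Python) works out the worst-case brute-force search space for a few password lengths and alphabets under an assumed guess rate. The guess rate and the alphabets are hypothetical illustrations, not figures from the Home Security Heroes table, so the printed numbers will not match the study's exactly.

```python
# Back-of-the-envelope estimate of brute-force effort vs. password length.
# GUESSES_PER_SECOND is an assumed throughput for illustration only; it is
# not a figure from the Home Security Heroes study or from PassGAN itself.

GUESSES_PER_SECOND = 1e10  # hypothetical cracking rig

CHARSETS = {
    "digits only": 10,
    "lowercase letters": 26,
    "upper + lower + digits": 62,
    "upper + lower + digits + symbols": 94,
}

def worst_case_days(length: int, alphabet_size: int) -> float:
    """Days needed to exhaust every combination at the assumed guess rate."""
    return alphabet_size ** length / GUESSES_PER_SECOND / 86_400

for label, size in CHARSETS.items():
    for length in (6, 12, 18):
        print(f"{label:32s} len={length:2d}  ~{worst_case_days(length, size):.3g} days")
```

The calculation makes the same point as the study's table: each extra character multiplies the search space by the alphabet size, so length and character variety dominate how long a password survives exhaustive guessing.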
On April 6, Andrew Bosworth, Meta's Chief Technology Officer, revealed that the company's CEO, Mark Zuckerberg, now spends most of his time on AI. Bosworth also called suggestions such as Elon Musk's proposed pause on AI development "impractical." Meta plans to commercialize its in-house generative AI technology before December of this year and to explore its practical applications in collaboration with Google. Since 2013, Meta has been dedicated to research in artificial intelligence, with a publication output comparable to Google's. In February of this year, Meta announced a new team to develop AIGC technology, and this is the first time it has given a timeline for commercializing it.
According to news on April 4, Cambridge University Press's new artificial intelligence ethics policy prohibits AI from being credited as an author of the academic papers and books it publishes. The publisher said the guidelines are intended to uphold academic standards, since tools like ChatGPT raise issues of plagiarism, originality, and accuracy. "It is clear that tools like ChatGPT cannot and should not be considered authors," said Mandy Hill, managing director of academic publishing at Cambridge University Press. The newly announced principles state that authors are responsible for the originality, completeness, and accuracy of their work, and that any use of artificial intelligence must be described in research papers, just as methodologies, software, and tools are. Moreover, any use of artificial intelligence must not violate the publisher's rules on academic plagiarism, meaning that academic works must be the author's own and must not present other people's ideas, materials, text, or other information without "full citation and reference." (Source: AI business)
According to news on April 3, the Writers Guild of America stated that AI-assisted scriptwriting can be used without affecting a writer's credit, and that studios can also assign AI-generated scripts to writers for editing or rewriting. The guild said AI is playing a growing role in writing, but it does not want AI to be used to generate "source material," meaning the works that inspire a script, such as magazine articles, novels, or plays. If AI output were treated as source material, the AI could effectively claim "screenwriter" credit; because tools like ChatGPT cannot be designated as creating "original material," the human writer keeps full "author" credit, with the AI treated as just a tool.
On April 3, according to a recent research report from cybersecurity company Darktrace, attackers are using generative AI tools such as ChatGPT to scale up phishing email attacks, which have increased by 135%. They do so partly by raising the linguistic complexity of their messages: more text, heavier punctuation, and longer sentences. The report notes that 30% of employees worldwide have previously fallen victim to fraudulent emails or text messages, and 87% of people are concerned that personal information they provide online could be used for phishing and other email scams. Over the past six months, the frequency of fraudulent emails and text messages has risen by 70%, and at 79% of companies, spam filters mistakenly block important legitimate emails from reaching inboxes.
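As a purely illustrative companion to the linguistic cues the report mentions (more text, heavier punctuation, longer sentences), the hypothetical Python sketch below computes those surface statistics for an email body. It is not Darktrace's detection method, and the sample message and feature choices are invented for illustration.

```python
# Hypothetical surface-feature scoring inspired by the cues cited in the report
# (sentence length and punctuation density). Illustration only; this is not
# Darktrace's methodology.
import re

def linguistic_profile(text: str) -> dict:
    """Return crude surface statistics for an email body."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    punctuation = sum(ch in ",;:!?()-" for ch in text)
    return {
        "avg_words_per_sentence": round(avg_sentence_len, 1),
        "punctuation_per_word": round(punctuation / max(len(words), 1), 2),
    }

sample = ("Dear customer, your account has been flagged; to avoid suspension, "
          "please verify your billing details, security questions, and recent "
          "transactions within 24 hours!")
print(linguistic_profile(sample))
```

Such features are weak signals on their own; the sketch only shows how the complexity the report describes can be quantified, not how it should be acted on.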
Geoffrey Hinton, a renowned computer scientist known as the "Godfather of AI," stated in an interview with CBS on March 26th that the development of general artificial intelligence (AGI) is progressing faster than people imagine. AGI refers to AI systems capable of performing human-like cognitive tasks and activities. Hinton initially believed it would take 20 to 50 years to achieve AGI, but now he suggests it may happen in less than 20 years. However, it's important to note that AGI development and its timeline are still subjects of ongoing research and exploration in the field of computer science.
Members of the Legislative Council are concerned about whether the authorities will follow mainland China's example and include artificial intelligence in the formal curriculum. Secretary for Education Choi Yuk-lin said that Hong Kong's junior secondary curriculum already contains learning elements on artificial intelligence, and that the elective ICT (Information and Communication Technology) subject in the senior secondary curriculum also covers related knowledge. As for whether an independent subject should be set up, the authorities will weigh various considerations, and development in this area will be discussed by the Curriculum Development Council. During the Legislative Council's oral question session, Election Committee member Wong Kam-fai asked whether the Education Bureau would consider adding teaching on the use of mainland applications when designing the content of innovation and technology courses. Choi responded that secondary-school information technology courses already involve mainland applications and platforms such as WeChat and Baidu, and that the authorities will not require schools to use particular programs. Election Committee member Deng Fei, noting that artificial intelligence has had a huge impact on technological development in recent years and that the mainland already offers elective subjects dedicated to artificial intelligence, asked whether the authorities planned to add similar subjects to the local curriculum. Choi said that relevant subjects in Hong Kong's junior secondary schools include dedicated units on the development of artificial intelligence, and senior secondary schools offer electives covering related content. As for whether a dedicated elective should be added as technology develops, she said the authorities will consider various factors, including student needs, and the matter will then be discussed by the Curriculum Development Council. (Source: Hong Kong Economic Journal)
The AI chatbot ChatGPT has sparked a global craze. This newspaper found that some primary and secondary schools in Hong Kong have also begun to "embrace ChatGPT," using it to ask questions, make announcements, and more. Tin Shui Wai Chinese YMCA Primary School is also using ChatGPT to further develop an AI teaching platform that helps teachers mark essays faster and improves students' English pronunciation. Principal Cheng Zhixiang said he hopes the new platform will prompt the education community to think about how new technologies can be used, stressing that "AI and the position of teachers have no conflict." He believes the two can work together, and that AI may even be given the chance to grade essays directly in the future. (Source: HKET)