AI making mistakes is not scary; what matters is examining and correcting them
Release time:
2018-10-31
Source:
"AI is also a mirror of humanity: its unsatisfactory aspects often reflect the imperfection of human nature, and offer humanity an opportunity for deeper self-reflection. The Amazon case shows that even in Western societies that have advocated gender equality for centuries, gender discrimination remains a serious problem."
Recently, Amazon's recruiting software was found to favor male candidates, scoring female applicants lower. The news has prompted a more objective, scientific view of artificial intelligence. Amazon's research team had originally envisioned feeding the software 100 resumes and having it return the top five, whom the company would hire first, greatly easing the pressure on HR departments.
Few large companies today share Amazon's ambition to solve talent selection wholesale, but using artificial intelligence to screen resumes and recommend positions has become routine in recruitment. Many recruiters believe AI can reduce the influence of their own subjective judgments. Amazon's failure, however, holds a lesson for the growing number of large companies looking to automate their hiring, and it prompts a rethinking of algorithmic fairness as a question of technology ethics. In Amazon's case, artificial intelligence not only inherited the biases of human society but distilled them, making them more "precise" and "direct."
More importantly, once an algorithm launders human bias, that bias puts on a cloak of seemingly objective, fair "science and technology." In fact, a system trained on biased data will necessarily be biased. Microsoft's chatbot Tay, for example, quickly picked up foul-mouthed, racist extremes from its interactions with netizens, cursing feminists and Jews on Twitter. This reminds us that the data an algorithm processes carries the characteristics of human society, and that the development of artificial intelligence must keep "people" in view: it must fully account for the limitations of humans as AI developers, and for how those limitations are inherited by the AI itself.
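The point that a system trained on biased data is necessarily biased can be illustrated with a minimal sketch. The data and the word-counting scorer below are entirely hypothetical, not Amazon's actual system: a scorer that learns word weights from past hiring decisions faithfully reproduces whatever bias those decisions contained, even though the scoring rule itself mentions nothing about gender.

```python
# Hypothetical sketch: a naive resume scorer trained on biased hiring
# history inherits the bias. All data below is invented for illustration.
from collections import Counter

# Past decisions that happened to disfavor resumes containing "women's".
history = [
    ("software engineer chess club", True),        # hired
    ("software engineer robotics", True),          # hired
    ("software engineer women's chess club", False),   # rejected
    ("software engineer women's robotics", False),     # rejected
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Sum a naive per-word weight: +1 if the word appeared more often in
    hired resumes, -1 if more often in rejected ones, 0 on a tie."""
    total = 0
    for w in resume.split():
        total += (hired_words[w] > rejected_words[w]) - (hired_words[w] < rejected_words[w])
    return total

# Two otherwise identical resumes diverge only on the historically
# penalized token, so the "neutral" scorer ranks one below the other.
print(score("software engineer chess club"))          # 0
print(score("software engineer women's chess club"))  # -1
```

Nothing in the scoring rule refers to gender; the discrimination lives entirely in the training data, which is exactly why such bias is easy to miss behind a veneer of objectivity.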
It is said that children are mirrors of their parents; artificial intelligence is likewise a mirror of humanity. Its unsatisfactory aspects often reflect the imperfection of human nature, and also give humanity a chance for deeper self-reflection. The Amazon case makes people realize that even in Western societies that have advocated gender equality for centuries, gender discrimination remains serious.
AI cannot be used in a vacuum, detached from real human context; otherwise it loses its meaning. And for the foreseeable future, perfecting humanity itself remains only a beautiful dream. How imperfect humans can overcome these problems in AI's self-learning, and create near-perfect artificial intelligence, thus becomes an important question that must be answered.
In fact, creating perfection out of imperfection is the genius of the human species. It is achieved not through anti-intellectual fantasy but through continuous patching and repair, closing the loopholes that accompany scientific and technological progress. A 2017 report revealed that searching for "doctor" on Google Images yielded mostly white male results, reflecting the social reality that doctors are associated with men and nurses with women. The problem was later largely fixed by modifying the algorithm.
Furthermore, the timely discovery and correction of AI's flaws depends on further refinement of science and technology policies and mechanisms. Advocating transparency and explainability of algorithms, within limits, is one feasible approach, and the establishment of accountability mechanisms for AI errors should also be accelerated. Japan, for example, has positioned AI medical devices as tools to assist doctors' diagnoses, and stipulated that when an AI device errs, the doctor bears responsibility. Whether this is entirely reasonable is debatable, but raising awareness of algorithmic accountability and building it into AI development planning and top-level design is a necessary step for AI's long-term development and for its benefit to people.