Artificial Intelligence (AI) is widely used in society today. A well-known problem is the (mis)use of biased data sets in machine learning applications, which leads to discrimination against and exclusion of citizens. Another is the use of non-transparent algorithms that cannot explain their outputs to users, so the AI is not trusted and therefore goes unused even when it could be beneficial.
Responsible Use of AI in Military Systems lays out what is required to develop and use AI in military systems in a responsible manner. Current developments in the emerging field of Responsible AI as applied to military systems in general (not merely weapons systems) are discussed. The book takes a broad and transdisciplinary scope by including contributions from the fields of philosophy, law, human factors, AI, systems engineering, and policy development.
The book is divided into five sections: Section I covers practical models and approaches to implementing military AI responsibly; Section II focuses on the liability and accountability of individuals and states; Section III deals with human control in human-AI military teams; Section IV addresses policy aspects such as multilateral security negotiations; and Section V focuses on 'autonomy' and 'meaningful human control' in weapons systems.
Key Features:
- Takes a broad transdisciplinary approach to responsible AI
- Examines military systems in the broad sense of the term, not merely weapons systems
- Focuses on the practical development and use of responsible AI
- Presents a coherent set of chapters, as all authors spent two days discussing each other’s work