KMU OLIS
The alignment problem: machine learning and human values /
Record type:
Bibliographic - language material, printed : Monograph/item
Title/Author:
The alignment problem: / edited by Brian Christian.
Other title:
machine learning and human values /
Author:
Christian, Brian,
Publisher:
New York, NY : W.W. Norton & Company, 2020
Edition:
1st ed.
Physical description:
xii, 476 p. ; 25 cm
Summary note:
"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--
Subject:
Social values.
ISBN:
9780393635829 (hard)
LDR  01584cam 2200169 a 4500
001  346345
008  211025s2020 nyu b 001 0 eng d
010  ## $a 2020029036
020  ## $a 9780393635829 (hard) : $c NT588
040  ## $a KMU
041  0# $a eng
087  #4 $a 198.312 $b C555
100  1# $a Christian, Brian, $e author $3 478208
245  14 $a The alignment problem : $b machine learning and human values / $c edited by Brian Christian.
250  ## $a 1st ed.
260  ## $a New York, NY : $b W.W. Norton & Company, $c 2020
300  ## $a xii, 476 p. ; $c 25 cm
504  ## $a Includes bibliographical references (p. [401]-451) and index.
520  ## $a "A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"-- $c Provided by publisher.
650  #0 $2 96060 $a Social values. $3 268819
650  #0 $a Software failures. $3 478207
650  #0 $a Machine learning $x Safety measures. $3 478206
650  #0 $a Artificial intelligence $x Social aspects. $3 478205
650  #0 $a Artificial intelligence $x Moral and ethical aspects. $3 478187
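The leader (LDR) line above is a fixed-position string defined by MARC 21: positions 00-04 give the record length, 05 the record status, 06 the record type, and 07 the bibliographic level. A minimal decoding sketch follows; the function name is our own, and since the leader as displayed here seems to have lost one blank (a full MARC 21 leader is 24 characters), only the unambiguous prefix is decoded.

```python
# Decode the leading fixed positions of a MARC 21 leader string.
# Positions follow the MARC 21 bibliographic leader layout:
#   00-04 record length, 05 status, 06 type of record, 07 bibliographic level.
def parse_leader_prefix(leader: str) -> dict:
    return {
        "record_length": int(leader[0:5]),   # logical record length in bytes
        "record_status": leader[5],          # 'c' = corrected or revised
        "type_of_record": leader[6],         # 'a' = language material
        "bibliographic_level": leader[7],    # 'm' = monograph/item
    }

info = parse_leader_prefix("01584cam 2200169 a 4500")
# For this record: length 1584, status 'c', type 'a', level 'm',
# i.e. a corrected/revised record for a printed monograph.
```

In the "2200169" group that follows, positions 10-11 ("22") are the indicator and subfield-code counts (always 2 in MARC 21), and 12-16 ("00169") is the base address of the data portion.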