2. I PROPOSE to consider the question, 'Can machines think?'
-- Turing
3. Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces.
You have only K at K6 and R at R1. It is your move.
What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
-- “Turing Test” by Turing
4. Voice One: “They made the machines.
That's what I'm trying to tell you.
Meat made the machines.”
Voice Two: "That's ridiculous.
How can meat make a machine?
You're asking me to believe in sentient meat."
-- "They're made out of meat” Terry Bisson
19. "Enterprise and Machine Learning"
Part II: "Machine Learning Algorithms"
Part II of last year's MaruLec lecture "Enterprise and
Machine Learning", http://bit.ly/1tqiKUJ (November 25, 2014),
gives a brief introduction to the major algorithms
(a minimal sketch of the first follows the list):
Gradient Descent
Decision Tree
Boosted Decision Tree
Support Vector Machine
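As a quick reminder of the first item, gradient descent repeatedly steps a parameter against the gradient of a loss. A minimal Lua sketch (the objective f(x) = (x - 3)^2 and the step size are illustrative assumptions, not from the lecture):
-- Minimize f(x) = (x - 3)^2 by following the negative gradient.
local x, lr = 0.0, 0.1
for step = 1, 100 do
  local grad = 2 * (x - 3)  -- f'(x)
  x = x - lr * grad         -- step against the gradient
end
print(x)  -- approaches the minimizer x = 3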
51. Now You Can Build Google’s $1M
Artificial Brain on the Cheap
On Monday, he’s publishing a paper that
shows how to build the same type of
system for just $20,000 using cheap,
but powerful, graphics microprocessors,
or GPUs. It’s a sort of DIY cookbook on
how to build a low-cost neural network.
http://www.wired.com/2013/06/andrew_ng/
Commoditizing neural networks with GPUs
June 2013
73. Convolutional Networks: 1989
LeNet: a layered model composed of
convolution and subsampling operations
followed by a holistic representation and
ultimately a classifier for handwritten
digits. [ LeNet ]
Background to the emergence of network description languages:
the basic pattern is established, then grows ever more complex
74. Convolutional Nets: 2012
AlexNet: a layered model composed of
convolution, subsampling, and further
operations followed by a holistic
representation and all-in-all a landmark
classifier on
ILSVRC12. [ AlexNet ]
+ data
+ gpu
+ non-saturating nonlinearity
+ regularization
75. Convolutional Nets: 2014
ILSVRC14 Winners: ~6.6% Top-5 error
- GoogLeNet: composition of multi-scale
dimension-reduced modules (pictured)
- VGG: 16 layers of 3x3 convolution
interleaved with max pooling + 3
fully-connected layers
+ depth
+ data
+ dimensionality reduction
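To make the "established basic pattern" concrete, the VGG recipe above (3x3 convolutions interleaved with max pooling) takes only a few lines in a network description language such as Torch's nn, used in the following slides; the channel sizes here (3 in, 64 out) are illustrative assumptions, not values from the papers:
require 'nn'
-- Sketch of one VGG-style block: two 3x3 convolutions with padding 1
-- (preserving width and height), each followed by a ReLU (the
-- non-saturating nonlinearity), then 2x2 max pooling.
block = nn.Sequential()
block:add(nn.SpatialConvolution(3, 64, 3, 3, 1, 1, 1, 1))
block:add(nn.ReLU())
block:add(nn.SpatialConvolution(64, 64, 3, 3, 1, 1, 1, 1))
block:add(nn.ReLU())
block:add(nn.SpatialMaxPooling(2, 2, 2, 2))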
81. require 'nn'
net = nn.Sequential()
net:add(nn.SpatialConvolution(1, 6, 5, 5))
-- 1 input image channel, 6 output channels, 5x5 convolution kernel
net:add(nn.SpatialMaxPooling(2,2,2,2))
-- A max-pooling operation that looks at 2x2 windows and finds the max.
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.SpatialMaxPooling(2,2,2,2))
net:add(nn.View(16*5*5))
-- reshapes from a 3D tensor of 16x5x5 into 1D tensor of 16*5*5
net:add(nn.Linear(16*5*5, 120))
-- fully connected layer (matrix multiplication between input and weights)
net:add(nn.Linear(120, 84))
net:add(nn.Linear(84, 10))
-- 10 is the number of outputs of the network (in this case, 10 digits)
net:add(nn.LogSoftMax())
-- converts the output to a log-probability. Useful for classification problems
print('Lenet5\n' .. net:__tostring());
86. 2. Define the neural network
require 'nn'
net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5))
-- 3 input image channels, 6 output channels, 5x5 convolution kernel
net:add(nn.SpatialMaxPooling(2,2,2,2))
-- A max-pooling operation that looks at 2x2 windows and finds the max.
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.SpatialMaxPooling(2,2,2,2))
net:add(nn.View(16*5*5))
-- reshapes from a 3D tensor of 16x5x5 into 1D tensor of 16*5*5
net:add(nn.Linear(16*5*5, 120))
-- fully connected layer (matrix multiplication between input and weights)
net:add(nn.Linear(120, 84))
net:add(nn.Linear(84, 10))
-- 10 is the number of outputs of the network (in this case, 10 digits)
net:add(nn.LogSoftMax())
-- converts the output to a log-probability. Useful for classification problems
print('Lenet5\n' .. net:__tostring());
87. 3. Define the loss function
4. Train the network
criterion = nn.ClassNLLCriterion()
-- use a negative log-likelihood criterion for classification
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.001
trainer.maxIteration = 5 -- just do 5 epochs of training.
trainer:train(trainset)
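Here trainset is assumed to have been loaded beforehand. nn.StochasticGradient expects a dataset with a size() method and index access returning {input, label} pairs; a sketch of that glue, in the style of the "Deep Learning with Torch: the 60-minute blitz" tutorial cited later (assuming trainset.data and trainset.label tensors already exist):
-- Give trainset the interface nn.StochasticGradient expects.
setmetatable(trainset,
  {__index = function(t, i) return {t.data[i], t.label[i]} end}
)
function trainset:size()
  return self.data:size(1)  -- number of training examples
end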
88. 5. Test the network and check its accuracy
print(classes[testset.label[100]])
itorch.image(testset.data[100])
-- Let us display an image from the test set to get familiar.
testset.data = testset.data:double()
-- convert from Byte tensor to Double tensor
for i=1,3 do -- over each image channel
testset.data[{ {}, {i}, {}, {} }]:add(-mean[i])
-- mean subtraction
testset.data[{ {}, {i}, {}, {} }]:div(stdv[i])
-- std scaling
end
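(mean and stdv are assumed to hold the per-channel statistics computed on the training set.) The slide stops at preprocessing; a sketch of the accuracy check itself, again in the style of the blitz tutorial and assuming a 10,000-image test set:
-- Count how often the class with the highest log-probability is correct.
correct = 0
for i = 1, 10000 do
  local groundtruth = testset.label[i]
  local prediction = net:forward(testset.data[i])
  local _, indices = torch.sort(prediction, true)  -- descending order
  if groundtruth == indices[1] then
    correct = correct + 1
  end
end
print(correct .. '/10000, ' .. (100 * correct / 10000) .. '%')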
89. Announcing the open-sourcing of the
"Torch Deep Learning Module":
"FAIR open sources deep-learning modules
for Torch", January 16, 2015
https://goo.gl/Yg98OO
95. References
“Torch -- A scientific computing framework for
LuaJIT” http://torch.ch/
“Getting started with Torch”
http://goo.gl/J7NKxC
“Deep Learning with Torch: the 60-minute blitz”
https://goo.gl/A76kjA
“Neural Network Graph Package”
https://github.com/torch/nngraph
96. References
“Fast Convolutional Nets With fbfft: A GPU
Performance Evaluation” http://goo.gl/PLDanO
“Fast Training of Convolutional Networks
through FFTs” http://goo.gl/B2qFVq
“Training an Object Classifier in Torch-7 on
multiple GPUs over ImageNet”
https://goo.gl/b14XOI
101. Blob storage and communication
// Assuming that data are on the CPU initially, and we have a blob.
const Dtype* foo;
Dtype* bar;
foo = blob.gpu_data(); // data copied cpu->gpu.
foo = blob.cpu_data();
// no data copied since both have up-to-date contents.
bar = blob.mutable_gpu_data(); // no data copied.
// ... some operations ...
bar = blob.mutable_gpu_data();
// no data copied when we are still on GPU.
foo = blob.cpu_data();
// data copied gpu->cpu, since the gpu side has modified the data
foo = blob.gpu_data();
// no data copied since both have up-to-date contents
bar = blob.mutable_cpu_data(); // still no data copied.
bar = blob.mutable_gpu_data(); // data copied cpu->gpu.
bar = blob.mutable_cpu_data(); // data copied gpu->cpu.
104. layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "pred"
bottom: "label"
top: "loss"
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "pred"
bottom: "label"
top: "loss"
loss_weight: 1
}
loss := 0
for layer in layers:
for top, loss_weight in layer.tops, layer.loss_weights:
loss += loss_weight * sum(top)
Loss
105. Solver
1. scaffolds the optimization bookkeeping and
creates the training network for learning and
test network(s) for evaluation.
2. iteratively optimizes by calling forward /
backward and updating parameters
3. (periodically) evaluates the test networks
4. snapshots the model and solver state
throughout the optimization
106. Each solver iteration:
1. calls network forward to compute the output
and loss
2. calls network backward to compute the
gradients
3. incorporates the gradients into parameter
updates according to the solver method
4. updates the solver state according to learning
rate, history, and method
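For the plain SGD solver, step 3 is the momentum update documented by Caffe, with learning rate alpha and momentum mu:
V_{t+1} = mu * V_t - alpha * grad L(W_t)
W_{t+1} = W_t + V_{t+1}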
107. Solver prototxt file
base_lr: 0.01 # begin training at a learning rate of 0.01 = 1e-2
lr_policy: "step" # learning rate policy: drop the learning rate in "steps"
# by a factor of gamma every stepsize iterations
gamma: 0.1 # drop the learning rate by a factor of 10
# (i.e., multiply it by a factor of gamma = 0.1)
stepsize: 100000 # drop the learning rate every 100K iterations
max_iter: 350000 # train for 350K iterations total
momentum: 0.9
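With the "step" policy the effective rate is base_lr * gamma^floor(iter / stepsize): 0.01 for iterations 0-99999, 0.01 * 0.1 = 0.001 through iteration 199999, 0.01 * 0.01 = 0.0001 through 299999, and 0.00001 for the final stretch to 350000.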
109. layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
# learning rate and decay multipliers for the filters
param { lr_mult: 1 decay_mult: 1 }
# learning rate and decay multipliers for the biases
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 96 # learn 96 filters
kernel_size: 11 # each filter is 11x11
stride: 4 # step 4 pixels between each filter application
weight_filler {
type: "gaussian" # initialize the filters from a Gaussian
std: 0.01 # distribution with stdev 0.01 (default mean: 0)
}
bias_filler {
type: "constant" # initialize the biases to zero (0)
value: 0
}
}
}
Vision Layers / Convolution
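The spatial output size follows from (input + 2 * pad - kernel_size) / stride + 1. Assuming an AlexNet-style 227x227 input (the input size is not given on the slide), conv1 yields (227 - 11) / 4 + 1 = 55, i.e. 96 feature maps of 55x55.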
110. layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3 # pool over a 3x3 region
stride: 2 # step two pixels (in the bottom blob)
# between pooling regions
}
}
Vision Layers / Pooling
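The same size formula applies to pooling: on the 55x55 maps from the convolution example above, (55 - 3) / 2 + 1 = 27, so pool1 emits 96 maps of 27x27; stride 2 with a 3x3 kernel makes the pooling windows overlap.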
113. layer {
name: "mnist"
# Data layer loads leveldb or lmdb storage DBs for high-throughput.
type: "Data"
# the 1st top is the data itself: the name is only convention
top: "data"
# the 2nd top is the ground truth: the name is only convention
top: "label"
# the Data layer configuration
data_param {
# path to the DB
source: "examples/mnist/mnist_train_lmdb"
# type of DB: LEVELDB or LMDB (LMDB supports concurrent reads)
backend: LMDB
# batch processing improves efficiency.
batch_size: 64
}
# common data transformations
transform_param {
# feature scaling coefficient: this maps the [0, 255] MNIST data to [0, 1]
scale: 0.00390625
}
}
Data Layer definition sample
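The scale coefficient is exactly 1 / 256 = 0.00390625, so byte pixel values 0-255 land in [0, 1).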
128. A Google Now API?
The API for Google Now as a voice question-answering
system has not been made public.
"Google Now API Will Soon Be Available to
Developers to Put in Their Apps"
http://goo.gl/X42UuO
There have also been reports like this: "An Open Google Now
Is About to Make Android Super Smart"
http://goo.gl/WsGMSo
Note, however, that this is not about opening up the API of
Google Now as a voice question-answering system.
129. The two faces of Google Now
Google Now has two faces. One is the voice-response
system; the other is the card-based user-assistance
system. What Google is about to open up is the latter.
The card-based user-assistance system draws on the usage
history and data of apps such as Gmail, Google Calendar,
Google Maps, and Google Search (for now, mostly Google's own
apps), compiles information useful to the user, and delivers
it as notification cards. This mechanism skirts the very edge
of user privacy protection, which makes it quite interesting,
but I will take that up on another occasion.
152. Installing and registering the VCD
The VCD is installed when the app launches.
protected override async void OnLaunched(LaunchActivatedEventArgs e)
// in App.xaml.cs
var storageFile = await
Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(
new Uri("ms-appx:///QuickstartCommands.xml"));
await Windows.Media.SpeechRecognition.VoiceCommandManager.
InstallCommandSetsFromStorageFileAsync(storageFile);
Naturally, the programming style is event-driven.
153. Checking and reading the event
Called when a voice command activates the app.
protected override void OnActivated(IActivatedEventArgs e)
// in App.xaml.cs
// Was the app activated by a voice command?
if (e.Kind !=
Windows.ApplicationModel.Activation.ActivationKind.VoiceCommand)
{ return; }
var commandArgs = e as
Windows.ApplicationModel.Activation.VoiceCommandActivatedEventArgs;
Windows.Media.SpeechRecognition.SpeechRecognitionResult
speechRecognitionResult = commandArgs.Result;
Again, the programming style is event-driven.
// The commandMode is either "voice" or "text", and it indicates
// how the voice command was entered by the user.
// We should respect "text" mode by providing feedback in a silent form.
string commandMode = this.SemanticInterpretation(
"commandMode", speechRecognitionResult);
// If so, get the name of the voice command, the actual text spoken,
// and the value of Command/Navigate@Target.
string voiceCommandName = speechRecognitionResult.RulePath[0];
string textSpoken = speechRecognitionResult.Text;
string navigationTarget = this.SemanticInterpretation(
"NavigationTarget", speechRecognitionResult);
Type navigateToPageType = typeof(MainPage);
string navigationParameterString = string.Empty;
Retrieving the various parameters
156. case "playAMovie":
string movieSearch = this.SemanticInterpretation(
"movieSearch", speechRecognitionResult);
navigateToPageType = typeof(PlayAMoviePage);
navigationParameterString = string.Format("{0}|{1}",
commandMode, movieSearch);
break;
default:
// There is no match for the voice command name.
break;
}
157. <?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns=
"http://schemas.microsoft.com/voicecommands/1.1">
<CommandSet xml:lang="en-us" Name="commandSet_en-us">
<CommandPrefix> Quickstart, </CommandPrefix>
<Example> Show sports section </Example>
<Command Name="showASection">
<Example> show sports section </Example>
<ListenFor> [show] {newspaperSection} [section] </ListenFor>
<Feedback> Showing the {newspaperSection} section </Feedback>
<Navigate Target="ShowASectionPage.xaml"/>
</Command>
<Command Name="goToASection">
<Example> go to the sports section </Example>
<ListenFor> [go to] [the] {newspaperSection} [section] </ListenFor>
<Feedback> Going to the {newspaperSection} section </Feedback>
<Navigate Target="ShowASectionPage.xaml"/>
</Command>
Voice Command Definition File
158. <Command Name="message">
<Example> message Avery I'm running late </Example>
<ListenFor> message {contact} {msgText} </ListenFor>
<Feedback> Messaging {contact} {msgText} </Feedback>
<Navigate Target="MessagePage.xaml"/>
</Command>
<Command Name="text">
<Example> text Avery I'm running late </Example>
<ListenFor> text {contact} {msgText} </ListenFor>
<Feedback> Texting {contact} {msgText} </Feedback>
<Navigate Target="MessagePage.xaml"/>
</Command>
<Command Name="playAMovie">
<Example> Play Casablanca </Example>
<ListenFor> Play {movieSearch} </ListenFor>
<Feedback> Playing {movieSearch} </Feedback>
<Navigate Target="PlayAMoviePage.xaml"/>
</Command>
159. <PhraseList Label="newspaperSection">
<Item> national news </Item>
<Item> world news </Item>
<Item> sports </Item>
</PhraseList>
<PhraseList Label="contact">
<Item> Avery </Item>
<Item> Monica </Item>
<Item> Rob </Item>
</PhraseList>
<PhraseTopic Label="msgText" Scenario="Short Message"/>
<PhraseTopic Label="movieSearch" Scenario="Search">
<Subject>Movies</Subject>
</PhraseTopic>
</CommandSet>
<!-- Other CommandSets for other languages -->
</VoiceCommands>
161. Amazon Echo and Alexa
For the behavior of Amazon Echo, see "Getting
Started with the Alexa Skills
Kit" https://goo.gl/0AdR1T
"Alexa Skills Kit Voice Design
Handbook" https://goo.gl/wSrkdq
Resources https://goo.gl/h9VTEx
162. Alexa conversation sample
User: “Alexa, tell Greeter to say hello”
Alexa: “Hello World!”
User: “Alexa, ask History Buff what happened
on August 20th”
Alexa: (Reads back three events, in reverse
chronological order) “Want to go deeper in
history?”
User: “Yes”
Alexa: (Reads back next set of three
events) “Want to go deeper in history?”
User: “No”
163. Alexa conversation sample
User: “Alexa, ask savvy consumer for top
books.”
Alexa: “Getting the best sellers for books. The
top seller for books is…(reads top
seller)…Would you like to hear more?”
User: “Yes”
Alexa: (reads back three book titles) “Would
you like to hear more?”
User: “No”
User: “Alexa, tell session my color is green.”
Alexa: “I now know that your favorite color is
green…”
164. Alexa conversation sample
User: “Alexa, tell score keeper to reset.”
Alexa: “New game started without players.
Who do you want to add first?”
User: “Add the player Bob”
Alexa: “Bob has joined your game”
User: “Add the player Jeff”
Alexa: “Jeff has joined your game”
(service saves the new game and ends)
User: “Alexa, tell score keeper to give red team
three points.”
Alexa: “Updating your score, three points for
red team”
(service saves the latest score and ends)
165. The main commands: Ask and Tell
Ask recipes how do I make an omelet?
Ask daily horoscopes what’s the horoscope for
Taurus
Ask daily horoscopes to give me the
horoscope for Taurus.
Ask daily horoscopes about Taurus
Tell scorekeeper to give ten points to Stephen
Tell scorekeeper that Stephen has ten points.
Tell scorekeeper
166. Other commands
Talk to <invocation name>
Talk to <invocation name> and <command>
Open <invocation name>
Open <invocation name> and <command>
Launch <invocation name>
Launch <invocation name> and <command>
Start <invocation name>
Start <invocation name> and <command>
Use <invocation name>
Use <invocation name> and <command>
....
173. helloworld.speechAsset.SampleUtterances.txt
HelloWorldIntent say hello
HelloWorldIntent say hello world
HelloWorldIntent hello
HelloWorldIntent say hi
HelloWorldIntent say hi world
HelloWorldIntent hi
HelloWorldIntent how are you
HelpIntent help
HelpIntent help me
HelpIntent what can I ask you
HelpIntent get help
HelpIntent to help
HelpIntent to help me
Format: intent name, then utterance.
These play the same role as Cortana's ListenFor:
each utterance is tagged with an intent name.
"say hello" and "how are you" are assigned
the same intent name.
174. helloworld.speechAsset.SampleUtterances.txt
HelpIntent what commands can I ask
HelpIntent what commands can I say
HelpIntent what can I do
HelpIntent what can I use this for
HelpIntent what questions can I ask
HelpIntent what can you do
HelpIntent what do you do
HelpIntent how do I use you
HelpIntent how can I use you
HelpIntent what can you tell me
if ("HelloWorldIntent".equals(intentName)) {
return getHelloResponse();
175. getHelloResponse
private SpeechletResponse getHelloResponse() {
String speechText = "Hello world";
// Create the Simple card content.
SimpleCard card = new SimpleCard();
card.setTitle("HelloWorld");
card.setContent(speechText);
// Create the plain text output.
PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
speech.setText(speechText);
return SpeechletResponse.newTellResponse(speech, card);
}
176. getHelpResponse
private SpeechletResponse getHelpResponse() {
String speechText = "You can say hello to me!";
// Create the Simple card content.
SimpleCard card = new SimpleCard();
card.setTitle("HelloWorld");
card.setContent(speechText);
// Create the plain text output.
PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
speech.setText(speechText);
// Create reprompt
Reprompt reprompt = new Reprompt();
reprompt.setOutputSpeech(speech);
return SpeechletResponse.newAskResponse(speech, reprompt, card);
}
178. HelloWorldSpeechletRequestStreamHandler
public final class HelloWorldSpeechletRequestStreamHandler
extends SpeechletRequestStreamHandler {
private static final Set<String> supportedApplicationIds =
new HashSet<String>();
static {
/*
* This Id can be found on
* https://developer.amazon.com/edw/home.html#/ "Edit" the relevant
* Alexa Skill and put the relevant Application Ids in this Set.
*/
supportedApplicationIds.add(
"amzn1.echo-sdk-ams.app.[unique-value-here]");
}
public HelloWorldSpeechletRequestStreamHandler() {
super(new HelloWorldSpeechlet(), supportedApplicationIds);
}
}
179. Alexa History Buff sample
User: “Alexa, ask History Buff what
happened on August 20th”
Alexa: (Reads back three events, in reverse
chronological order) “Want to go deeper in
history?”
User: “Yes”
Alexa: (Reads back next set of three
events) “Want to go deeper in history?”
User: “No”
It uses Wikipedia data. The SampleUtterances.txt
covering calendar dates is quite a sight. Code
fragments are in the Appendix.
180. Alexa Score Keeper sample
User: “Alexa, tell score keeper to give red
team three points.”
Alexa: “Updating your score, three points
for red team”
(service saves the latest score and ends)
Possibly a useful reference for session management. Team and score
data are stored in DynamoDB. Code fragments are in the Appendix.
209. Slot Grammar Lexicon
Here the headword is access. Two entries are registered
for this headword: v obj and n (p to).
v stands for verb and n for noun; these are the
headword's parts of speech. The entry states that
access can be either a verb or a noun.
access        the headword
< v obj       v is a verb
< n (p to)    n is a noun
The obj and (p to) that follow the parts of speech v and n
denote slots.
210. Slot Grammar Lexicon
The next entry,
give < v obj iobj
states that give is a verb (v) taking two slots, obj and iobj.
Alice gave the book to Bob.
Alice gave Bob the book.
211. In the first sentence, the obj slot is filled by the noun
phrase (NP) the book, and the iobj slot by the prepositional
phrase (PP) to Bob. In the second sentence, the iobj slot is
filled, in a different position, by the noun phrase Bob, and the
obj slot, also in a different position, by the noun phrase the book.
What fills iobj differs between the two sentences (an NP versus
a to-PP), but its logical role is the same, so in either sentence
we can tell that Bob is the person who was given the book.
221. Wikipedia Infobox and DBpedia
DBpedia is a crowd-sourced community
effort to extract structured information from
Wikipedia and make this information
available on the Web.
223. {{Infobox scientist
| name = Marie Skłodowska-Curie
| image = Marie Curie c1920.png
| image_size=220px
| caption = Marie Curie, ca. 1920
| birth_date = {{birth date|1867|11|7|df=y}}
| birth_place = [[Warsaw]], [[Congress Poland|Kingdom of Poland]],
then part of [[Russian Empire]]<ref>http://www.nobelprize.org/nobel_prizes/
| death_date = {{death date and age|df=yes|1934|7|4|1867|11|7}}
| death_place = [[Passy, Haute-Savoie]], France
| residence = [[Poland]] and [[France]]
| citizenship = Poland<br />France
| field = [[Physics]], [[Chemistry]]
| work_institutions = [[University of Paris]]
| alma_mater = University of Paris <br />[[ESPCI]]
| doctoral_advisor = [[Henri Becquerel]]
| doctoral_students = [[André-Louis Debierne]]<br />[[Óscar Moreno]]<br />
| known_for = [[Radioactivity]], [[polonium]], [[radium]]
| spouse = [[Pierre Curie]] (1859–1906)
| prizes = {{nowrap|[[Nobel Prize in Physics]] (1903)<br />[[Davy Medal]] (
| footnotes = She is the only person to win a [[Nobel Prize]] in two different s
| religion = Agnostic
| signature = Marie_Curie_Skłodowska_Signature_Polish.jpg
}}
Marie Curie's Infobox markup
227. Search summary for Marie Curie
Marie Skłodowska-Curie was a French-Polish
physicist and chemist, famous for her
pioneering research on radioactivity. She was
the first person honored with two Nobel
Prizes—in physics and chemistry. Wikipedia
Born: November 7, 1867, Warsaw
Died: July 4, 1934, Sancellemoz
Spouse: Pierre Curie (m. 1895–1906)
Discovered: Polonium, Radium
Children: Irène Joliot-Curie, Ève Curie
Education: University of Paris (1894),
University of Paris (1893), More
233. Question analysis:
How Watson reads a clue
A. Lally, J. M. Prager, M. C. McCord, B. K.
Boguraev, S. Patwardhan, J. Fan, P. Fodor,
and J. Chu-Carroll
http://brenocon.com/watson_special_issue/02%20question%20analysis.pdf