Questions and Comments for Keynote


  • Question by Jacob Barhak:
To what level can Watson already read and extract data from published clinical papers? For example, can it read and analyze Table 1 in a clinical trial paper to extract the average age or average blood pressure at baseline for each cohort?
Response: Watson processes only English text, so it will not interpret graphs or charts. We have other tools that do numerical analyses whose output can be fed into Watson, but Watson will not do that analysis itself.
  • Question from Herbert Sauro:
How does Watson compare with a simple Google search? I am sure a Google search could have identified the person in the clue: "In May 1898 Portugal celebrated the 400th anniversary of this explorer's arrival in India". Is Watson orders of magnitude better, an incremental improvement, or something in between?
Response: It depends on how you look at it; Watson is very different from Google in certain features. Let me use a different search engine for comparison: NLM PubMed. When you search, it returns a long list of articles, but then you have to go back and read the abstracts, and then work through the articles that might be helpful. You spend a lot of time and get lots of sources. Watson, by contrast, comes back with ideas drawn from the articles and suggestions to be considered, and then you can go back to the reading. Google gives you a list, but if you don't find the answer right away, you must go back to the literature.
  • Question from Ahmet Erdemir:
While Watson provides a synthesis of current knowledge, what is its potential utility to generate new knowledge? Or, its potential to compare innovation breakthroughs against established knowledge?
Response: Watson’s strength is its ability to read huge volumes. For example, consider a patient who is short of breath. What is the reason? What can I do to make him better? Watson looks at multiple problems and determines how to come up with an optimum plan that minimizes the negative effects of one treatment on another disease. The plan is for Watson to read articles about patients with multiple conditions and extract information to inform decisions. For clinical decision support—drawing on both knowledge (the literature) and data—we have a tool called patient similarity analysis: it finds similar patients (by medical history and demographic information) and builds a cohort of patients who share multiple characteristics with your patient. Watson may do this by reading large volumes, but other tools at IBM can do this in other ways.
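The patient-similarity idea described in the response above can be sketched in a few lines. This is a minimal illustration, not IBM's actual tool: the feature choices, the cosine-similarity measure, and all names are assumptions made for the example.

```python
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two numeric patient-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similar_cohort(patient, records, k=3):
    """Return the k records most similar to the index patient."""
    ranked = sorted(records, key=lambda r: similarity(patient, r["features"]),
                    reverse=True)
    return ranked[:k]

# Illustrative features: [age, systolic BP, BMI] (real tools would
# normalize and use far richer history/demographic data).
index_patient = [65, 140, 31]
cohort = similar_cohort(index_patient, [
    {"id": "p1", "features": [67, 138, 30]},
    {"id": "p2", "features": [25, 110, 22]},
    {"id": "p3", "features": [70, 145, 33]},
], k=2)
```

The returned cohort would then be the "similar patients" whose outcomes inform decisions about the index patient.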
  • Question from Raj Vadigepalli:
When generating suggestions, is there emphasis on generating distinct ones? Does the data/experience across multiple providers get integrated before it becomes published/public knowledge?
Response: Analyses will go through multiple paths, but in the end a single list of suggestions will be given. Confidence values will be determined through the different paths and, to an extent, be crowd-sourced. If Watson finds a suggestion in only one journal, but a very reliable one, it might bring the suggestion forward but assign it a lower confidence. Will suggestions be distinct? Most will be distinct, but sometimes there will be overlap. Watson will get better over time; early on it may not distinguish between similar diagnoses.
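One way the "multiple paths, single list" behavior could work is sketched below. The noisy-OR combination rule and the example confidences are assumptions for illustration, not Watson's actual scoring.

```python
def merge_suggestions(paths):
    """Combine candidate suggestions from several analysis paths into one
    ranked list. A candidate found by multiple paths gains confidence
    (noisy-OR combination); one supported by a single source keeps a
    lower combined score."""
    combined = {}
    for path in paths:
        for suggestion, conf in path:
            prev = combined.get(suggestion, 0.0)
            # noisy-OR: 1 - product of (1 - conf_i) over supporting paths
            combined[suggestion] = 1 - (1 - prev) * (1 - conf)
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

ranked = merge_suggestions([
    [("diagnosis A", 0.6), ("diagnosis B", 0.5)],
    [("diagnosis A", 0.4)],   # a second path also supports A
    [("diagnosis C", 0.7)],   # single-path suggestion
])
```

Here "diagnosis A" ends up ranked first because two independent paths support it, even though neither path alone scored it highest.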
  • Question from Ronan Fleming:
Will Watson be able to assign journals a "Clinical Impact Factor"?
Response: It is still a little early in this process; the journal impact factor is only one bit of information Watson uses. Ultimately Watson will decide for itself whether a source is valid. We are currently teaching Watson how to evaluate individual articles, e.g. giving more credence to one with a large "n value". This is what we call informed evidence analysis. Watson is taught to evaluate both individual articles and the broad literature.
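The "more credence to a large n" idea can be sketched as a simple weighting function. The log scale and the example studies are illustrative assumptions, not IBM's informed evidence analysis.

```python
from math import log

def evidence_weight(n, base_weight=1.0):
    """Toy credence score: larger study populations earn more weight,
    with diminishing returns (log scale)."""
    return base_weight * log(1 + n)

# Hypothetical sources, ranked by how much credence they would earn.
studies = [
    {"title": "small pilot", "n": 20},
    {"title": "large RCT", "n": 2000},
]
weighted = sorted(studies, key=lambda s: evidence_weight(s["n"]), reverse=True)
```

A real system would fold many more signals (study design, recency, journal) into `base_weight`; the point is only that sample size becomes one explicit input to source credibility.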
  • Question from Jennifer Linderman:
How could Watson use information that might be most clear to a physician who knows the patient very well, e.g. that a certain complaint is very unusual and noteworthy for one patient, who rarely complains, but is within the range of normal for another patient?
Response: This is important, and it is one of the reasons we don’t view Watson as deterministic. The relationship between you and the patient, which you understand better than any computer can, cannot be captured by a computer. We hope Watson may learn that not everyone reports pain the same way, and that it could adjust pain levels to patient behavior. Some people think the future is for Watson to make decisions, but Martin Kohn does not agree.
  • Question from Louis Gross:
Watson is a combination of machine learning and expert system. In establishing the ranking score for Watson suggestions that go to the "experts" is there a plan for robustness of the ranking? In environmental systems we have developed a methodology for relative assessment of scenarios that uses extensive parallel computation across uncertainties of inputs and models to evaluate robustness of rankings.

Grace's Notes

Marty Kohn, IBM, described the move toward the more patient-centered, evidence-supported information needed to make better decisions: Precision Medicine. Watson is an NLP tool; it understands English, reading huge volumes of text such as the literature (in 1.5–3 seconds). Watson's progression: tabulation, programmatic computing, cognitive systems. Watson for Jeopardy understands, generates, and evaluates hypotheses, and adapts and learns (machine learning systems); it produces a number that describes a confidence threshold that is “relevant to the decision”. Watson has a reliability index for stored information. Massively parallel algorithms process every part of the question; Watson doesn’t remember past questions and answers, but it learns over time.

Watson for healthcare breaks down and understands the question, looks for answer sources, and evaluates each of the possible answers; it is never 100% confident, but it reaches high confidence levels. Healthcare reasoning is interactive: temporal reasoning (sequence and timing of symptoms), geospatial reasoning (anatomic), and statistical paraphrasing (mapping between medical terminology and lay terms). Watson knows what it doesn’t know (humans don’t). Healthcare often does NOT have a single answer.

Human behavior interferes with making accurate decisions and results in errors and poor choices. Common human biases: the flaw of availability, and anchor bias (looking for supporting evidence to self-reinforce a bias). Look at the human behaviors that cause errors and use Watson to avoid those errors. Technology at its best is an enabler, encouraging behaviors and processes that ensure better outcomes; technology itself does not drive change. Humans are better at taking a bunch of ideas and determining which one is relevant. Watson is the shallow-reasoning teacher: unbiased, learning over time, and telling you what information is missing. The EHR doesn’t tell you what information is missing; it is passive, doesn’t offer suggestions for diagnosis, and is not interactive.

Watson needs to understand more complex discussions and needs a training set. Watson’s interface needs to be useful and convenient to its users. What are the physicians’ preferences? What are the patients’ preferences? IBM is working, or will work, with payers and providers. Watson would be available through the cloud.
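The pipeline sketched in the notes (break down the question, generate candidate hypotheses, score each against evidence, answer only above a confidence level) could be caricatured as follows. The keyword-overlap scorer, the threshold value, and the example data are invented for illustration; Watson's real evidence scorers are far richer.

```python
CONFIDENCE_THRESHOLD = 0.5  # illustrative cutoff, not Watson's

def keyword_overlap(question, candidate):
    """Toy evidence scorer: fraction of question words found in the candidate."""
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

def answer(question, scorers, candidates):
    """Score every candidate hypothesis with every scorer, average the
    scores, and answer only when confidence clears the threshold --
    otherwise decline ("knows what it doesn't know")."""
    scored = [(cand, sum(s(question, cand) for s in scorers) / len(scorers))
              for cand in candidates]
    best, conf = max(scored, key=lambda kv: kv[1])
    return (best, conf) if conf >= CONFIDENCE_THRESHOLD else (None, conf)

# Echoing the da Gama clue from the Q&A above:
best, conf = answer(
    "explorer arrival in india 1498",
    [keyword_overlap],
    ["vasco da gama explorer arrival india", "christopher columbus america"],
)
```

Declining to answer when confidence is low is the behavior the notes contrast with human overconfidence.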



