Having walked through each component of choosing an appropriate research topic in our "Learn from Experts" section, a natural next step is to analyze recent trends. To help you identify a suitable topic, we have therefore compiled a list of grants and funds awarded to studies over the last year. This will give you a clear picture of the current trajectory of academic research in particular disciplines and focus areas.
Above all, funding signals the importance of a given domain within a focus area. Organisations award funds to research studies that are of high value to the community and can play a major role in advancing the state of a discipline.
Here is a list of Seed Grant Awards by HAI at Stanford University (for further details, see https://hai.stanford.edu/research/grant-opportunities). These topics can inspire you to choose the one that suits you best.
Artificial Intelligence for Scientific Discovery
We are interested in using AI methods to estimate and infer causal effects in various settings. First, we consider settings where multi-armed bandits are used to determine optimal treatments. Many questions remain concerning the ability to use, ex post, the data generated by multi-armed bandit algorithms to draw robust causal inferences, in particular for heterogeneous effects in contextual bandits. Second, we wish to use AI to mine for causal effects. Consider a setting where a product is sold in a market at a price chosen by a seller. We wish to estimate the demand function: the expected value of demand at each price, including prices different from the actual realized price in the market. Such a demand function would capture the causal effect of price on demand. The relation between historical quantity and price data is confounded because the seller chooses the price to maximize profits, using information about anticipated demand, rather than setting the price randomly. To make progress, we need to train the AI to identify unanticipated price changes.
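To make the confounding problem concrete, here is a minimal sketch (not part of the proposal) showing, on simulated data, why a naive regression of demand on observed prices is biased when the seller sets prices in response to anticipated demand, and how restricting attention to unanticipated (randomly set) price changes recovers the causal slope. All variable names and numbers are illustrative assumptions.

```python
# A minimal sketch (not the authors' method) of why observational price-demand
# data is confounded, and how exogenous ("unanticipated") price changes help.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True causal model: demand = 100 - 2 * price + noise
demand_shock = rng.normal(0, 5, n)           # the seller partially anticipates this
unanticipated = rng.random(n) < 0.2          # 20% of prices are set exogenously

# The seller raises the price when anticipated demand is high (confounding),
# except in the unanticipated cases where the price is set at random.
price = np.where(unanticipated,
                 rng.uniform(10, 30, n),
                 20 + 0.8 * demand_shock + rng.normal(0, 1, n))
demand = 100 - 2 * price + demand_shock + rng.normal(0, 2, n)

def ols_slope(x, y):
    """Slope of a simple least-squares fit of y on x."""
    return np.polyfit(x, y, 1)[0]

print("naive slope (all data):      ", round(ols_slope(price, demand), 2))   # biased toward zero
print("slope on unanticipated only: ", round(ols_slope(price[unanticipated],
                                                       demand[unanticipated]), 2))  # close to -2
```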
A Causal Decision-Learning Approach for Identifying Cost-Effective Clinical Pathways
In response to growing pressure to control their costs, many US healthcare providers have been producing detailed internal cost data. Specifically, many US hospitals have adopted cost accounting systems that generate cost data at the level of individual clinical service items. Our project will leverage this granular cost data to learn clinical decisions and pathways that minimize overall episode costs without compromising quality of care. The project seeks to answer the following question: for a given condition, what clinical services should be provided to the patient, and in what sequence, in order to minimize costs while maintaining high quality of care? The goal of the research is to inform clinical practice.
Virtual Multisensory Interaction: From Robots to Humans
Current Virtual Reality technology focuses on a faithful visual representation of the real world. We propose to enable truly immersive virtual experiences for online shopping, training, and entertainment in which a user can virtually interact with physical objects and environments and receive more complete real-time multisensory (visual, tactile, and audio) feedback. The current roadblocks to achieving this goal are (1) the acquisition of data to develop realistic virtual worlds, and (2) the multi-sensory devices required to display these worlds to human users. Both of these challenges can be addressed through the development of new robotic, modelling, and data-driven technologies.
Statistical Machine Learning for Understanding and Improving Social Mobility Among the Poor: A Precision Approach to Intervention
The National Poverty Study (NPS) will gather evidence related to the many sources of poverty from a nationally representative sample of poor and middle-class households. Using a randomized trial, it also tests an intervention designed to improve individuals' future economic outcomes. In our work we will leverage and extend counterfactual machine learning methods to analyze the pilot data from this study. Our hope is to gain insights into the individual and contextual features that determine the success of poverty interventions and the predictors of mobility in general.
Understanding and Addressing Ethical Challenges with Implementation of Machine Learning to Advance Palliative Care
The understanding of the specific ethical issues emerging with machine learning (ML) in healthcare delivery has so far been limited by the lack of examinable clinical ML implementations. In this project, we propose to examine the ethical challenges emerging with implementation of an ML system to predict 3-12 month mortality and guide outreach by Palliative Care Clinicians to the treating team. The proposed research will provide observations and feedback from all stakeholders (researchers, designers, clinicians, and patients) involved in this implementation. Once the values of different stakeholders, and how these values influence design and implementation decisions, are understood, ethical pitfalls associated with the implementation can be identified and strategies to minimize potential harms can be employed.
Administering by Algorithm: Artificial Intelligence in the Regulatory State
This project will explore the growing role that artificial intelligence (AI) and machine learning (ML) are playing in the federal administrative state. Such uses will almost certainly increase as AI becomes more sophisticated and less expensive. The first part of the report will paint a comprehensive descriptive portrait of AI use by federal agencies. The second part will illuminate and help resolve the many legal and policy-analytic questions that policymakers, agency administrators, judges, and lawyers will have to ask as agency use of AI proliferates. What implications do machine-driven tasks have for existing principles of constitutional and administrative law? How can agency AI usage reshape the administrative state, and what should agencies and society consider doing in response? And how can agency use of AI be made to conform, where possible, to norms around public deliberation about administrative decisions?
RoboIterum: An Augmented Reality Interface for Iterative Design of Situated Interactions with Intelligent Robots
We envision a future in which robots collaborate with us and augment our capabilities. This highlights the need for effortless programming of contextually-aware robots; however, engineers face many challenges. Firstly, autonomous robots operate based on complex sensing and learning algorithms. Moreover, many interdependent parameters determine robot behavior, and engineers are unable to effectively explore the large, complex space of possible outcomes. Finally, due to the situatedness of interactions and the uncertainty of human behavior, predicting how the robot affects and is affected by people and the environment is a challenge. We investigate the use of spatial information visualization techniques through a head-mounted Augmented Reality interface for understanding, programming, and debugging intelligent mobile robots. Our tool provides spatially situated visualizations that make visible the robot's current and future internal states and its relation to people, objects, and the environment. Using this tool, engineers can make sense of the large parameter space, explore a range of alternative scenarios, and observe their predicted outcomes.
Urbanization at the Margins: Edges & Seams in the Global South
An unprecedented volume of visual data is available for contemporary cities. By quantifying and classifying the visual field of cityscapes with the help of AI and computer vision, our research seeks to ground-truth claims about the direct effects of physical infrastructure and institutions on built environments and urban agriculture in peripheral zones. We seek to deploy these techniques in order to measure hard-to-quantify aspects of urban environments, particularly in cities where other spatialized micro data are not available or not provided in commensurable units. Our empirical assessment will focus on edges or seams in cities in the global south. Edges: where the city begins to fade into the country. Seams: where major highways, rivers, and institutional divisions (zones) demarcate space. Edges may provide insight into future patterns of urbanization, environmental change, and shifts in access to farmland and food security. Seams reveal how human and natural features influence the built environment and factor into the production of housing vulnerability.
Deep Neural Network for Real-Time EEG Decoding of Musical Rhythm Imagery: Towards a Brain-Computer Interface Application for Stroke Rehabilitation
Data-driven sound creation is a central feature of computer music. An EEG-based drum machine will explore the potential for musical rhythm imagery classification to operate a musical device. Interestingly, neural activity during perception and imagery of temporal structures of music also overlaps closely with neural activity during motor imagery. Studies have explored the motor imagery task for Brain-Computer Interface (BCI) applications using noninvasive EEG. Convolutional Neural Networks (CNNs) can extract features from EEG during motor imagery to control a BCI apparatus in real time. Recently, research has shown that EEG is modulated during music imagery in a way that reflects the speed of musical tempo. Furthermore, CNN-based classification has also proven successful for EEG recorded during musical rhythm imagery. Thus, our idea is to create an EEG-BCI-based drum machine linked to musical rhythm imagery. We will train CNNs on existing open-source data from musical rhythm imagery tasks to recognize musical features and then incorporate the results into a prototype BCI system. This opens the door to future BCIs driven by imagery for other musical features such as pitch, timbre, and harmony.
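For readers unfamiliar with the modeling side, the sketch below shows the general shape of a CNN classifier for EEG epochs. The architecture and dimensions (32 channels, 2-second epochs at 256 Hz, 4 rhythm classes) are illustrative assumptions, not the project's actual network.

```python
# A minimal sketch of a 1D CNN for EEG rhythm-imagery classification.
# Assumptions: 32 EEG channels, 512-sample epochs, 4 imagery classes.
import torch
import torch.nn as nn

class RhythmImageryCNN(nn.Module):
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, n_channels, n_samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # raw logits; pair with CrossEntropyLoss

model = RhythmImageryCNN()
logits = model(torch.randn(8, 32, 512))   # dummy batch of 8 EEG epochs
print(logits.shape)                        # torch.Size([8, 4])
```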
Multi-modal Inference in Brains, Minds, and Machines
Tobias Gerstenberg (Psychology), Justin Gardner (Psychology), Hyo Gweon (Psychology), Scott W. Linderman (Statistics, Neuroscience)
Humans build mental models of the world and use these models to infer the past, present, and future. Despite advances, modern AI still struggles to perform tasks that rely on common sense. The fundamental challenge is that such tasks require learning causal models of the world and reasoning with them. In this project, we investigate how humans build and use mental models to make inferences based on sight and sound. Simulating mental models of the physical world generates internal representations of sensory data by conditioning on visual and auditory evidence. We study multi-modal inference that naturally elicits physical simulation, using eye-tracking to better understand the process of visual mental simulation, and fMRI to investigate auditory simulation. We also study the development of multi-modal inference in early childhood to understand how adult intuitive physical understanding develops. Together, these studies will help develop more powerful models of multi-modal inference in humans and provide the foundation for smarter, more human-like AI technology.
Video-based Real Time Monitoring of Respiratory Conditions in the Pediatric Emergency Department
Sumit Bhargava (Pediatrics), Dan Imler (Emergency Medicine), Peyton Greenside (Computer Science), Karan Goel (Computer Science)
Asthma is the most common chronic disease in children, causing a loss of 10 million school days annually in the United States for children aged 5-17. These children often come to Stanford’s Pediatric Emergency Department (PED), where they are monitored until their symptoms abate or an appropriate course of treatment is determined. Automated real-time monitoring of respiratory illnesses constitutes a critical opportunity to save time and reduce adverse events. Deep learning techniques, along with high-throughput sensing modalities such as audio, video and clinical vital signals, together enable a machine learning driven solution to improve both patient and clinician experience. Our interdisciplinary team will develop new algorithms to extract clinically relevant features from videos of respiratory distress in the PED. We will then provide real-time decision support in order to more rapidly decide which patients to admit, diminishing unnecessary time spent by patients in the PED. Ultimately, we hope to deploy our methods in places where access to high-quality healthcare remains a challenge.
Learning to Play: Understanding Infant Development with Intrinsically Motivated Artificial Agents
Daniel LK Yamins (Psychology), Fei-Fei Li (Computer Science), Mike Frank (Psychology), Nick Haber (Psychology), Damian Mrowca (Computer Science), Stephanie Wang (Computer Science), Kun Ho Kim (Computer Science), Eli Wang (Electrical Engineering), Bria Long (Psychology), Judy Fan (Psychology), George Kachergis (Psychology), Hyo Gweon (Psychology)
Within the first two years of life, human infants develop a remarkable suite of exploratory behaviors -- they begin to direct their own movement through their environment, they can shift their attention to events they find most interesting, and they interact with objects in complex, creative ways. This ability to explore and (re)structure their environment sets infants apart from even the most advanced robots today, which must be taught what to do in every scenario. The goal of this proposal is to develop computationally precise models of exploration and learning that mirror the trajectory of human infant development. Our ongoing strategy is to combine approaches from artificial intelligence, computational cognitive science, and developmental psychology.
Folk Theories of AI Systems: An Approach for Developing Interpretable AI
Jeffrey Hancock (Communication), Michael Bernstein (Computer Science), Sunny Liu (Communication), Danae Metaxa (Computer Science)
This project examines the folk theories that people have about AI. Folk theories refer to the intuitive and explanatory frameworks that people use to interpret and anticipate outcomes of complex systems, such as AI. In this project, we introduce the concept of folk theories to AI and 1) investigate the kinds of folk theories that people hold about complex AI systems and 2) examine how these folk theories influence people's behaviors, emotions, and attitudes toward AI. The project will highlight how AI systems impact society not only through their direct outputs, but also through the malleable (and potentially problematic) folk theories that people hold about them. The findings will advance our understanding of how to create and deploy AI systems that are interpretable and human-centered, and inform the development of AI systems that classify the folk theory a user holds and adjust their behavior to compensate for or correct the user's model.
“Personality Design” in Artificial Intelligence-enabled Robots, Conversational Agents and Virtual Assistants
Pamela Hinds (Management Science and Engineering), Angele Christine (Communication), Prachee Jain (Management Science and Engineering)
Artificial Intelligence-enabled technologies like conversational agents, virtual assistants, and social robots are becoming ubiquitous - assisting in medicine, giving information, helping with daily chores, or merely providing company. These technologies now include an additional element: "personality design." What personality design means to different team members varies tremendously. Operationalizing it involves collaboration among multifunctional team members across industrial, conversational, sound, and interaction design. What does it mean for designers, engineers, personality designers, and other members of multifunctional teams to design an AI-enabled technology with a personality? How is this technology designed to form and maintain relationships with not just one but several users, and how do these relationships change over time? We anticipate that this inductive research will not just help build academic theory related to the design process of technology and teamwork but also offer practical insights into how design intentions for AI-enabled technology form, evolve, and are controlled over time.
Using AI to Safeguard Our Drinking Water
Kate Maher (Earth System Science), Jef Caers (Geological Sciences), Bill Mitch (Civil and Environmental Engineering), James Dennedy-Frank (Biology)
Estimates suggest that tens of millions of Americans rely on water systems that exceed at least one health-based quality standard. To address this concern, our project will apply AI capable of recognizing changes and patterns in water quality. This is challenging for a number of reasons. Historical water quality data are sparse spatially and temporally and often do not capture extreme events, like floods. Current applications of machine learning often focus on one chemical species, only consider variation in space but not time, and do not account for the physical relationships observed within river and groundwater systems. This project will apply recurrent neural networks (RNNs) to model and detect water quality changes over time, and then seek to integrate this framework with existing physics-based streamflow predictions. Our overarching objective is to determine whether human-centered AI can ensure our basic right to safe, clean water by providing real-time water quality alert systems.
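As a rough illustration of the RNN component, the sketch below uses an LSTM to produce a one-step-ahead forecast of a single water-quality signal and flags readings that deviate strongly from the forecast. The feature count, window length, and threshold are hypothetical placeholders, not values from the project.

```python
# A minimal sketch (hypothetical features and threshold) of LSTM-based
# forecasting of a water-quality signal with simple anomaly flagging.
import torch
import torch.nn as nn

class WaterQualityLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predict the next reading of one species

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # one-step-ahead forecast

model = WaterQualityLSTM()
window = torch.randn(1, 48, 4)             # last 48 hourly readings from 4 sensors
forecast = model(window)

observed = torch.tensor([[2.7]])           # latest measured value
residual = (observed - forecast).abs()
if residual.item() > 1.0:                  # threshold would be tuned on held-out data
    print("possible water-quality anomaly")
```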
Predicting Learning Outcomes with Machine Teachers
Roy Pea (Education), Emma Brunskill (Computer Science), Bethanie Maples (Computer Science), Joyce He (Education)
Chatbots are increasing in functionality and popularity, and offer new directions for providing learning opportunities. Our goal is to understand how we may predict learning outcomes based on users' social communication profiles (face-to-face, and on social media), and to understand how to adapt teaching agents for different learner profiles. Drawing on the computers-as-social-agents framework developed by Nass and Reeves, and using the most widely downloaded non-task-oriented agent available for English speakers today, we explore how learners' beliefs in agent identity and narrative, and learners' motivations for use, affect learning. We explore how social discussions and inquiry into well-being affect participants' learning outside the scope of the social discussions. We conjecture that cognitive gains for learners are based on learner narratives about the agent and the degree of identity transfer between learner and agent. We will explore how learning might be increased for populations less likely to benefit from agent interactions through advanced conversational adaptivity.
Developing a Computational Approach for Identifying Characteristics of Psychotherapy
Bruce Arnow (Psychiatry and Behavioral Sciences), Stewart Agras (Psychiatry and Behavioral Sciences), Nigam Shah (Medicine, Biomedical Data Science), Fei-Fei Li (Computer Science), Adam Miner (Psychiatry and Behavioral Sciences), Albert Haque (Computer Science), Michelle Guo (Computer Science)
Clinical depression is common and costly. Although psychotherapy is generally effective, no single approach has proven superior. Additionally, specific clinical tactics have historically been expensive or time-consuming to label and assess. Despite clinical trials demonstrating psychotherapy efficacy, little is known about the specific therapist behaviors that account for good patient outcomes. We seek to develop a privacy-focused computational approach for identifying characteristics of psychotherapy that lead to good patient outcomes. If successful, our approach will identify potent mechanisms of change in psychotherapy that can be used to improve training and deliver simpler, more effective forms of psychotherapy.
Crowdsourcing Concept Art: An Art Style Classifier to Maintain Consistent Artistic Vision at Scale
Michael Bernstein (Computer Science), Camille Utterback (Art and Art History)
A key feature of major artistic feats like live-action movies and animated films is a consistent artistic vision and style coordinated across a large group of artistic contributors. This is achieved through experienced art directors who define and communicate an artistic direction to teams of visual artists. In some cases, the team produces a style bible, which specifies guidelines that artists must follow to ensure adherence to the main artistic direction. However, most efforts lack singular vision and style due to the modularized division of labor in a distributed volunteer creative team and the subjectivity of artistic expression in open-ended design problems like animated filmmaking. We aim to develop a socio-technical system that helps distributed teams of artists define and preserve creative vision and style throughout an animation project. Artists can submit their work to our system for feedback on how far it has deviated from the agreed style. If the artwork has deviated, our system suggests edits to help it adhere to the style guide.
Modeling Finger Sense Training and Math Learning in Children
Allison Okamura (Mechanical Engineering), Jo Boaler (Education), Dorsa Sadigh (Computer Science, Electrical Engineering), Melisa Orta Martinez (Mechanical Engineering), Julie Walker (Mechanical Engineering), Margaret Koehler (Mechanical Engineering)
Finger sense training has been proven to increase young children's mathematical abilities, yet no study has examined and modeled the co-development of finger perception and mathematics learning. We have developed a haptic device designed specifically to develop mathematical understanding through finger perception in young children (aged 5-7) and plan to use it to gain insight into how finger perception aids mathematical understanding. During this study, half of the students will perform mathematics tasks using our device and half will perform the same tasks using a computer. We plan to apply modeling and reinforcement learning to compare the learning progress of students with and without access to the haptic devices. We propose to add student-specific parameters and recordings of the interactions with the device to give us better insight into how a student's mathematical ability develops as a function of their interaction with our device and their finger sense.
PopBots: An Army of Conversational Agents for Daily Stress Management
Dan Jurafsky (Computer Science, Linguistics), Pablo E. Paredes (Radiology, Psychiatry and Behavioral Sciences)
Stress management is a growing need in our society. The top two reasons people give for not managing their stress are lack of willpower and lack of time [2]. Addressing this problem requires solutions that people can use in everyday environments, such as conversational agents using messaging apps or voice-based agents [6]. However, current chatbot systems see limited adoption for two reasons: people have no time to commit to long interactions, and current technology sounds unnatural in long conversations. We propose the creation of micro chatbots, inspired by prior research on stress micro-interventions [10]. Our pilot work has created bots based on known coping strategies [10,12] such as problem-solving (Sherlock bot), positive thinking (Glass-Half-Full bot), humor (Sir-Laughs-A-Bot), worst-case scenario (Doom bot), and distraction (Distracti-bot). Our research will focus on text-based solutions for mobile phones and voice-based solutions geared towards in-car commuting, and will address how to parse people's stressors, when to respond, how to create rapport, and how to develop and select appropriate chatbots.
Opportunistic Screening for Coronary Artery Disease Using Artificial Intelligence
David Maron (Cardiovascular Medicine), Alex Sandhu (Cardiovascular Medicine), Fatima Rodriguez (Cardiovascular Medicine), Bhavik Patel (Radiology), Matthew Lungren (Radiology), Curtis Langlotz (Radiology), Christopher Chute (Computer Science), Pranav Rajpurkar (Computer Science)
Coronary artery disease is the #1 cause of death. It develops silently, but it is detectable long before symptoms develop with an electrocardiogram (ECG)-gated CT that can quantify the amount of coronary artery calcification (CAC). When the CAC score is >100 Agatston units, guidelines recommend statin therapy to reduce the risk of heart attack and death. However, CAC testing is underutilized because insurers do not pay for these tests. Meanwhile, 19 million ungated chest CT scans are performed annually in the US for non-cardiac reasons. Radiologists may or may not assess and report the presence of CAC, and CAC cannot be scored objectively on these scans because current technology requires ECG-gating to calculate a score. The aim of our research is to develop a deep learning algorithm to identify and quantify CAC on non-gated chest CTs and, in a randomized controlled trial, demonstrate that statin therapy increases when patients and physicians are alerted to the presence of incidental CAC equivalent to >100 Agatston units by an opportunistic screening model.
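For context on what such an algorithm must reproduce, the sketch below computes a simplified Agatston-style score for one CT slice: voxels above 130 Hounsfield units inside a (model-predicted) coronary mask are counted, and the lesion area is weighted by peak density. Real scoring treats each connected lesion separately and assumes ECG-gated acquisitions; the inputs here are dummy placeholders, not project code.

```python
# A minimal, simplified sketch of Agatston-style calcium scoring on one slice.
# Real scoring weights each connected lesion separately; this uses one peak per slice.
import numpy as np

def density_weight(peak_hu):
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1  # 130-199 HU

def agatston_slice_score(hu_slice, coronary_mask, pixel_area_mm2):
    """Score one axial slice given HU values and a segmentation of the coronaries."""
    calcified = (hu_slice >= 130) & coronary_mask
    if not calcified.any():
        return 0.0
    area_mm2 = calcified.sum() * pixel_area_mm2
    return area_mm2 * density_weight(hu_slice[calcified].max())

# Dummy data standing in for one slice and a mask produced by an upstream model.
hu = np.random.default_rng(0).integers(-100, 500, size=(512, 512))
mask = np.zeros((512, 512), dtype=bool)
mask[200:210, 300:310] = True
print(agatston_slice_score(hu, mask, pixel_area_mm2=0.25))
```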
Promoting Well-Being by Predicting Behavioral Vulnerability in Real-Time
Jeffrey Hancock (Communication), Gabriella Harani (Communication), Jure Leskovec (Computer Science), Adam Miner (Psychiatry and Behavioral Sciences), Róbert Pálovics (Computer Science), Katie Roehrick (Communication)
The overarching goal of the Deep Social Environment Sampling (deepSens) project is to predict behavioral vulnerability in real-time in order to deliver timely interventions that promote wellbeing. To achieve this aim, we seek to explore how smartphone sensor data and in-screen content can be used to develop real-time monitoring of behavior and mood. These predictive models can then be leveraged to deliver personalized interventions that address moments of behavioral vulnerability—such as when individuals become physically or socially isolated—and thereby support individuals in managing their daily wellbeing. This project will support the groundwork needed to: (1) detect in real-time the onset of problematic behavior patterns as captured by each individual’s digital record, and (2) deliver personalized feedback to individuals about the relationship between their mood and sensed behavior.
Using Artificial Intelligence to Optimize Patient Mobility and Functional Outcomes
Arnold Milstein (Medicine), Fei-Fei Li (Computer Science), Francesca Rinaldo Salipur (CERC, General Surgery), Bingbin Liu (Computer Science)
In spite of advances in critical care, survivors of prolonged, high-intensity care frequently suffer from post-intensive care syndrome (PICS). Current practices for monitoring patient mobility, including direct human observation and mining of the electronic health record (EHR), are time- and labor-intensive, prone to inaccurate documentation, and involve a notable time lag between patient care and reporting. We have previously demonstrated the feasibility of using computer vision technology (CVT) as an alternative approach by passively capturing data from the clinical environment and using machine-learning algorithms to detect and quantify patient and staff activities automatically. Our current research aims to further develop and validate these algorithms for detection of a broader spectrum of mobility activities, creating a real-time data collection system that is hospital-deployable. Furthermore, using mobility patterns as measured by our CVT system, as well as patient demographic and clinical data, as features, a novel machine-learning model will be developed to determine how mobility protocols may be designed to optimize functional status at ICU discharge.
Predicting Malaria Outbreaks: AI to Learn, Classify and Predict Across Diverse Paleo-demographic, Climatic and Genomic Data
Krish Seetah (Anthropology), Robert Dunbar (Earth System Science), Carlos Bustamante (Biomedical Data Science, Genetics), Giulio De Leo (Biology), Erin Mordecai (Biology), Michelle Barry (CIGH), Bright Zhou (Medicine), David Pickel (Classics), Hannah Moots (Anthropology)
This project seeks to predict the impact of malaria over the next 50-100 years. The project incorporates AI tools to recognize patterns in transmission over time. We will access vast, data-rich evidence on climate, land use, and human behavior from historic epidemics, alongside genetic evidence on human demography and vector and parasite biology. Malaria threatens 3.5 billion people in ~97 countries. 90% of those affected live in Africa, and children under five suffer the highest morbidity. Despite billions in funding for eradication efforts, prevalence is increasing. The problem is exacerbated by resistance to pesticides and drugs and the lack of a proven vaccine. We lack the tools to predict transmission as a function of climatic, land-use, and demographic factors. How do multiple factors interact to exacerbate or mitigate outbreaks? Our models could guide disease prediction, providing evidence to help adjust policy for targeted intervention.
Robust Deep Neural Network Optimization with Second Order Method for Biomedical Applications
Stephen Boyd (Electrical Engineering), Mohsen Bayati (Business), Lei Xing (Radiation Oncology), Varun Vasudevan (ICME), Kate Horst (Radiation Oncology)
The development of robust optimization tools is a significant step in applying deep neural networks to biomedical data, particularly 3D or even 4D images, due to high-dimensional training data, complex network structures, and the non-convex nature of the optimization landscape. Currently, first-order optimization methods such as stochastic gradient descent are the method of choice in deep learning, despite their well-known deficiencies in convergence, speed, and scale of data handling. The demand for substantially improved neural network optimization techniques is growing rapidly with the evolution and adoption of new neural network architectures such as graph neural networks (GNNs). In this project, we will investigate second-order methods for the optimization of GNNs and convolutional neural networks (CNNs) and demonstrate their potential impact on a series of biomedical applications such as disease detection, diagnosis, treatment planning, and monitoring.
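As a small illustration of the difference in workflow, the sketch below fits a toy regression with PyTorch's built-in LBFGS optimizer, a quasi-second-order method that re-evaluates the objective inside a closure. It stands in for the kinds of second-order methods the project targets and is not meant to reflect GNN- or CNN-scale training.

```python
# A minimal sketch contrasting the usual SGD loop with a (quasi-)second-order
# optimizer, using torch.optim.LBFGS on a toy regression problem.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0, max_iter=20)

def closure():
    # LBFGS may re-evaluate the objective several times per step,
    # so the loss and gradients are computed inside a closure.
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    return loss

for _ in range(5):
    optimizer.step(closure)

print("final loss:", loss_fn(model(X), y).item())
```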
The Economic Consequences of Artificial Intelligence
Nick Bloom (Economics), Matt Gentzkow (Economics), Pete Klenow (Economics), Caroline Hoxby (H&S, Hoover Institution, SIEPR), Tim Bresnahan (Economics), Michael Webb (Economics)
Recent breakthroughs in artificial intelligence have led to widespread anxiety about how the technology will impact the labor market. Prior research on this question has relied on expert forecasts. While valuable, these forecasts are impossible to validate, and cannot be ‘unpacked’ to test the sensitivity of specific assumptions. In this project, we will build a model that maps specific changes in technology into changes in labor demand. We will train and validate the model using historical episodes of technological change. Finally, we will project the model forwards to predict the impacts of AI.
Uncovering gender inequalities in East Africa: Using AI to gain insights from media data
Gary Darmstadt (Pediatrics-Neonatology), James Zou (Biomedical Data Science), Londa Schiebinger (History), Ann Weber (Pediatrics), Valerie Meausoone (Population Health Sciences)
Gender inequality intersects with discrimination by race, age, and social class in ways that affect the health and wellbeing of people around the world. Analysis of media data from the U.S. has revealed under-recognized gender, racial, and ethnic biases in the public sphere, demonstrating how language can reflect social, political, and institutional environments. We aim to take advantage of Artificial Intelligence (AI) methods, such as natural language processing combined with machine learning, to gain insights into the ways different gender groups are perceived in East African media. Specifically, we aim to build a database of word embeddings for gendered terms trained on publicly available media data, focusing on three former British colonies: Kenya, Uganda, and Tanzania. We hypothesize that quantified gender biases will show country-specific differences in gender stereotypes that codify attitudes and behaviors. We expect that the newly created corpora and word-embedding databases will serve as a resource to spur innovative partnerships with local East African researchers and can be extended, in the future, to non-English languages prevalent in the region.
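The kind of measurement described above can be sketched as follows: train word embeddings on a media corpus, define a she-he axis, and project occupation words onto it. The corpus path, word lists, and hyperparameters below are hypothetical placeholders; the real study would use curated East African media text.

```python
# A minimal sketch of embedding-based gender-bias measurement.
# The corpus file and word lists are hypothetical placeholders.
import numpy as np
from gensim.models import Word2Vec

# Assume one tokenized article per line in a plain-text media corpus.
sentences = [line.split() for line in open("kenya_media_corpus.txt", encoding="utf-8")]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)
wv = model.wv

def gender_bias(word, wv):
    """Cosine projection of a word vector onto the she-he axis (>0 leans 'she')."""
    axis = wv["she"] - wv["he"]
    axis /= np.linalg.norm(axis)
    vec = wv[word] / np.linalg.norm(wv[word])
    return float(vec @ axis)

for occupation in ["teacher", "farmer", "nurse", "engineer"]:
    if occupation in wv:
        print(occupation, round(gender_bias(occupation, wv), 3))
```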
Developing Artificial Intelligence Tools for Dynamic Cancer Treatment Strategies
Daniel Rubin (Radiology, Biomedical Data Science), Michael Gensheimer (Radiation Oncology), Susan Athey (Business), Ross Shachter (Management Science and Engineering), Jiaming Zeng (Management Science and Engineering)
With the rising number and complexity of cancer therapies, it is increasingly difficult for clinicians to identify an optimal combination of treatments for a patient. Our research aims to provide a decision support tool to optimize and supplement cancer treatment decisions. Leveraging machine learning, causal inference, and decision analysis, we will use electronic medical records to develop dynamic cancer treatment strategies that advise clinicians and patients based on patient characteristics, medical history, and other factors. The research hopes to bridge the understanding between causal inference and decision analysis and ultimately develop an artificial intelligence tool that improves clinical outcomes over current practices.
Want out? Removing Individuals’ Data from Machine Learning Models
James Zou (Biomedical Data Science), Greg Valiant (Computer Science), Amy Motomura (Law)
Data collected from individuals is the fuel that drives modern AI. In this project we initiate a framework to study what to do when it is no longer permissible to deploy models derived from certain individuals' data. In particular, we formulate the problem of how to efficiently delete individual data from ML models that have been trained using this data. For many standard ML models, the only way to completely remove a person's data is to retrain the whole model from scratch on the remaining data. This is not feasible in many settings and is a fundamental impediment to implementing Right To Be Forgotten-style policies in practice. We will develop both the theory and practice of efficient data deletion from ML models.
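Efficient deletion is easiest to see for simple model classes. The sketch below (not the authors' algorithm) shows exact removal of one user's example from a ridge-regression model by subtracting its contribution from cached sufficient statistics and re-solving, which avoids retraining on all remaining data.

```python
# A minimal sketch of exact data deletion for ridge regression: keep the
# sufficient statistics (XtX, Xty), subtract the deleted row, and re-solve.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1000)
lam = 1e-2

# Cached sufficient statistics from the original fit.
XtX = X.T @ X
Xty = X.T @ y
w = np.linalg.solve(XtX + lam * np.eye(8), Xty)

def delete_row(i, XtX, Xty):
    """Remove example i's contribution and re-solve (cost is independent of n)."""
    xi, yi = X[i], y[i]
    XtX_new = XtX - np.outer(xi, xi)
    Xty_new = Xty - yi * xi
    w_new = np.linalg.solve(XtX_new + lam * np.eye(8), Xty_new)
    return XtX_new, Xty_new, w_new

XtX, Xty, w_after = delete_row(42, XtX, Xty)

# Check against retraining from scratch on the remaining data.
mask = np.ones(1000, dtype=bool)
mask[42] = False
w_scratch = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(8), X[mask].T @ y[mask])
print(np.allclose(w_after, w_scratch))   # True
```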