Systems, methods, and apparatus for diagnostic inferencing with a multimodal deep memory network

    Publication Number: US11621075B2

    Publication Date: 2023-04-04

    Application Number: US16330174

    Application Date: 2017-09-05

    Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolutional neural network, to create embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnosis embeddings (222) can be based on the amount of attention that was given to the images and electronic health records when a particular diagnosis was made. For instance, the amount of attention can be characterized by data (110) generated from sensors that monitor the eye movements of clinicians observing the medical images and electronic health records. Resulting patient diagnoses can be provided according to a predetermined classification of weights, or as a compilation of words generated over multiple iterations of the multimodal deep memory network.
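
    As a rough illustration only, and not the patented implementation, the following PyTorch sketch shows one way such a multimodal network could be wired: a small convolutional encoder for medical images, a recurrent encoder for EHR token sequences, and an attention step whose scores can be biased by externally supplied, gaze-derived weights before a diagnosis class is predicted. All class names, dimensions, and the gaze_bias input are hypothetical assumptions for the sketch.

    # Minimal sketch (not the patented method): CNN image embedding + GRU EHR
    # embedding fused by attention that can be nudged by gaze-derived weights.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultimodalDiagnosisNet(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, num_diagnoses=10):
            super().__init__()
            # Convolutional encoder for single-channel medical images.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
            # Recurrent encoder for EHR token sequences.
            self.ehr_embedding = nn.Embedding(vocab_size, embed_dim)
            self.ehr_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
            # Attention scoring over the two modality embeddings.
            self.attention = nn.Linear(embed_dim, 1)
            self.classifier = nn.Linear(embed_dim, num_diagnoses)

        def forward(self, image, ehr_tokens, gaze_bias=None):
            img_emb = self.image_encoder(image)                   # (B, D)
            _, ehr_hidden = self.ehr_encoder(self.ehr_embedding(ehr_tokens))
            ehr_emb = ehr_hidden[-1]                              # (B, D)
            modalities = torch.stack([img_emb, ehr_emb], dim=1)   # (B, 2, D)
            scores = self.attention(modalities).squeeze(-1)       # (B, 2)
            if gaze_bias is not None:
                # Hypothetical (B, 2) log-weights derived from eye-tracking data,
                # biasing attention toward the modality clinicians viewed more.
                scores = scores + gaze_bias
            weights = F.softmax(scores, dim=1).unsqueeze(-1)      # (B, 2, 1)
            fused = (weights * modalities).sum(dim=1)             # (B, D)
            return self.classifier(fused)

    if __name__ == "__main__":
        model = MultimodalDiagnosisNet()
        image = torch.randn(4, 1, 128, 128)
        ehr_tokens = torch.randint(0, 1000, (4, 20))
        gaze_bias = torch.zeros(4, 2)
        print(model(image, ehr_tokens, gaze_bias).shape)  # torch.Size([4, 10])

    In this sketch the gaze data enters only as an additive bias on the attention scores; the abstract leaves open how the eye-movement data (110) actually conditions the connections, so this is one plausible reading rather than the claimed mechanism.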

    Semi-supervised classification with stacked autoencoder

    Publication Number: US11544529B2

    Publication Date: 2023-01-03

    Application Number: US16329959

    Application Date: 2017-09-04

    Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification, and may combine what is referred to herein as a “residual stacked de-noising autoencoder” (“RSDA”) (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or “MLP”). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA “remember” forgotten information across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
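
    As a rough illustration only, and not the patent's RSDA, the sketch below shows a stacked denoising autoencoder with a single residual connection, paired with an MLP classifier on the learned code. It is trained jointly: a reconstruction loss on unlabeled inputs and a cross-entropy loss on labeled inputs. PyTorch is assumed; all names, dimensions, and the alpha weighting are hypothetical.

    # Minimal sketch (not the patented method): denoising autoencoder with one
    # residual connection plus an MLP classifier, optimized with a joint loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualStackedDAE(nn.Module):
        def __init__(self, input_dim=300, hidden_dim=128, code_dim=64, num_classes=5):
            super().__init__()
            self.enc1 = nn.Linear(input_dim, hidden_dim)
            self.enc2 = nn.Linear(hidden_dim, code_dim)
            self.dec1 = nn.Linear(code_dim, hidden_dim)
            self.dec2 = nn.Linear(hidden_dim, input_dim)
            # Residual path carrying the first-layer activation around the bottleneck.
            self.skip = nn.Linear(hidden_dim, hidden_dim)
            # Supervised classifier (MLP) operating on the learned code.
            self.classifier = nn.Sequential(
                nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
            )

        def forward(self, x, noise_std=0.1):
            noisy = x + noise_std * torch.randn_like(x)   # denoising corruption
            h1 = F.relu(self.enc1(noisy))
            code = F.relu(self.enc2(h1))
            h2 = F.relu(self.dec1(code) + self.skip(h1))  # residual connection
            recon = self.dec2(h2)
            logits = self.classifier(code)
            return recon, logits

    def train_step(model, optimizer, x_unlabeled, x_labeled, y_labeled, alpha=0.5):
        """One joint update: reconstruction on unlabeled data, cross-entropy on labels."""
        optimizer.zero_grad()
        recon_u, _ = model(x_unlabeled)
        recon_loss = F.mse_loss(recon_u, x_unlabeled)
        _, logits_l = model(x_labeled)
        class_loss = F.cross_entropy(logits_l, y_labeled)
        loss = recon_loss + alpha * class_loss
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        model = ResidualStackedDAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x_u = torch.randn(32, 300)          # unlabeled sentence vectors (assumed)
        x_l = torch.randn(8, 300)           # labeled sentence vectors (assumed)
        y_l = torch.randint(0, 5, (8,))
        print(train_step(model, opt, x_u, x_l, y_l))

    The single joint update mirrors the abstract's point that the unlabeled (RSDA) and labeled (classifier) objectives are trained simultaneously; how the patent stacks, pre-trains, or weights the layers is not specified here, so those choices in the sketch are placeholders.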
