Machine learning avatar for consolidating and presenting data in virtual environments
Abstract:
Processes, systems, and devices generate a training set comprising a first presentation having a first visual aid and a first audio description. The first visual aid and the first audio description are based on initial data retrieved from a first data source using a first indexing technique. A machine-learning system is trained using the first presentation and the initial data retrieved from the first data source using the first indexing technique. The machine-learning system generates a second presentation having a second visual aid and a second audio description. The second visual aid and the second audio description are based on refreshed data retrieved from the first data source using the first indexing technique. The machine-learning system presents the second presentation via an avatar in a virtual meeting room. The avatar is generated by the machine-learning system to present the second visual aid and the second audio description.
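The pipeline described in the abstract (retrieve indexed data, build a presentation with a visual aid and audio description, train a system on that pair, then regenerate the presentation from refreshed data) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the dictionary data source, the key-lookup "indexing technique", and the `PresentationModel` stand-in for the machine-learning system are all hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Presentation:
    visual_aid: str        # e.g. a chart spec rendered in the virtual meeting room
    audio_description: str # narration spoken by the avatar

def retrieve(data_source, index_key):
    # "First indexing technique": here, a simple key lookup (illustrative only).
    return data_source.get(index_key, [])

def build_presentation(records):
    # Derive both the visual aid and the audio description from the same records.
    values = [r["value"] for r in records]
    visual = f"bar_chart({values})"
    audio = f"The {len(records)} records total {sum(values)}."
    return Presentation(visual, audio)

class PresentationModel:
    """Hypothetical stand-in for the trained machine-learning system."""
    def train(self, presentation, records):
        # Training set: the first presentation paired with the initial data.
        # Here we simply memorize the data-to-presentation mapping.
        self.template = build_presentation

    def generate(self, refreshed_records):
        # Second presentation, built from refreshed data via the learned mapping.
        return self.template(refreshed_records)

# Initial data -> first presentation -> training.
source = {"q1": [{"value": 10}, {"value": 20}]}
initial = retrieve(source, "q1")
first = build_presentation(initial)
model = PresentationModel()
model.train(first, initial)

# Refreshed data from the SAME source and indexing technique -> second presentation,
# which an avatar would then deliver in the virtual meeting room.
source["q1"] = [{"value": 15}, {"value": 25}]
second = model.generate(retrieve(source, "q1"))
```

The key property the sketch preserves is that the second presentation reuses the same data source and indexing technique as the first, so only the data is refreshed, not the retrieval path.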