Invention Grant
- Patent Title: Protecting cognitive systems from model stealing attacks
- Application No.: US15714514
- Application Date: 2017-09-25
- Publication No.: US11023593B2
- Publication Date: 2021-06-01
- Inventor: Taesung Lee, Ian M. Molloy, Dong Su
- Applicant: International Business Machines Corporation
- Applicant Address: US NY Armonk
- Assignee: International Business Machines Corporation
- Current Assignee: International Business Machines Corporation
- Current Assignee Address: US NY Armonk
- Agent: Stephen J. Walder, Jr.; Jeffrey S. LaBaw
- Main IPC: G06N3/04
- IPC: G06N3/04 ; G06N3/08 ; G06F21/60 ; G06Q10/06

Abstract:
Mechanisms are provided for obfuscating the trained configuration of cognitive model logic. The mechanisms receive input data for classification into one or more of a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation into a function associated with generating the output vector, thereby producing a modified output vector, which is then output. The perturbation modifies one or more values of the output vector to obfuscate the trained configuration of the trained cognitive model logic while maintaining the accuracy of classification of the input data.
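The abstract's core idea, perturbing the output vector so that the exact model scores are obscured but the predicted class is unchanged, can be illustrated with a minimal sketch. This is not the patented algorithm; the function name, noise distribution, and parameters below are illustrative assumptions only.

```python
import numpy as np

def obfuscated_softmax(logits, noise_scale=0.1, seed=None):
    """Illustrative sketch (not the patented method): insert a random
    perturbation into a classifier's output vector while keeping the
    top-1 class intact, so classification accuracy is maintained but
    the exact output values no longer reveal the trained model."""
    rng = np.random.default_rng(seed)

    # Standard softmax to produce the output vector over predefined classes.
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    top = int(np.argmax(probs))

    # Insert a perturbation into every component of the output vector.
    noisy = probs + rng.uniform(-noise_scale, noise_scale, probs.shape)
    noisy = np.clip(noisy, 1e-9, None)

    # Restore the original winning class so the classification result
    # (argmax) is unchanged by the obfuscation.
    noisy[top] = noisy.max() + 1e-6

    # Renormalize so the modified output vector still sums to 1.
    return noisy / noisy.sum()
```

An attacker querying such a model sees distorted per-class scores, which degrades the gradient/score information needed to train a substitute model, while a legitimate client still receives the correct predicted class.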
Public/Granted literature
- US20190095629A1: Protecting Cognitive Systems from Model Stealing Attacks (Publication Date: 2019-03-28)