Attack_SplitNN

reveal the vulnerabilities of SplitNN


Attack_SplitNN allows you to easily experiment with various combinations of attack and defense algorithms against SplitNN within PyTorch and scikit-learn.

Install

    pip install git+https://github.com/Koukyosyumei/Attack_SplitNN

SplitNN

You can easily create a two-party SplitNN with this package as follows.
The client holds only the input data, and the server holds only the labels. This package implements SplitNN as a custom torch.nn.Module, so you can train a SplitNN like a normal torch model.

Example:

    model_1 = FirstNet()
    model_1 = model_1.to(device)
    model_2 = SecondNet()
    model_2 = model_2.to(device)
    opt_1 = optim.Adam(model_1.parameters(), lr=1e-3)
    opt_2 = optim.Adam(model_2.parameters(), lr=1e-3)
    criterion = nn.BCELoss()

    client = Client(model_1)
    server = Server(model_2)
    splitnn = SplitNN(client, server, opt_1, opt_2)

    splitnn.train()
    for epoch in range(3):
        epoch_loss = 0
        epoch_outputs = []
        epoch_labels = []
        for i, data in enumerate(train_loader):
            splitnn.zero_grads()
            inputs, labels = data
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = splitnn(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            epoch_loss += loss.item() / len(train_loader.dataset)
            epoch_outputs.append(outputs)
            epoch_labels.append(labels)

            splitnn.backward()
            splitnn.step()

        print(epoch_loss, torch_auc(torch.cat(epoch_labels),
                                    torch.cat(epoch_outputs)))
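The example above assumes that FirstNet, SecondNet, and torch_auc are defined elsewhere. A minimal sketch of what they might look like for a binary task (the architectures and the helper below are illustrative assumptions, not the repository's own definitions):

    import torch
    import torch.nn as nn
    from sklearn.metrics import roc_auc_score

    # Hypothetical client-side model: maps raw inputs to the intermediate
    # representation that crosses the cut layer to the server.
    class FirstNet(nn.Module):
        def __init__(self, in_features=784, hidden=64):
            super().__init__()
            self.layers = nn.Sequential(nn.Linear(in_features, hidden),
                                        nn.ReLU())

        def forward(self, x):
            return self.layers(x)

    # Hypothetical server-side model: maps the intermediate representation
    # to a probability for the binary label that only the server holds.
    class SecondNet(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.layers = nn.Sequential(nn.Linear(hidden, 1),
                                        nn.Sigmoid())

        def forward(self, x):
            return self.layers(x)

    # AUC helper assumed by the training loop above (not part of torch).
    def torch_auc(labels, outputs):
        return roc_auc_score(labels.detach().cpu().numpy(),
                             outputs.detach().cpu().numpy())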

Attack

Attack_SplitNN offers several attack methods with the same interface. For a flavor of how they work, a sketch of the norm attack follows the table.

| Attack | Type | Example | Reference |
| --- | --- | --- | --- |
| Intermediate Level Attack | evasion attack | notebook | original paper |
| Norm Attack | label leakage attack | notebook | original paper |
| Transfer Inherit Attack | membership inference attack | notebook | original paper |
| Black Box Model Inversion Attack | model inversion attack | notebook | blog |
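The norm attack rests on the observation that, in an imbalanced binary task trained with BCE loss, the gradient the server sends back through the cut layer tends to have a larger norm for positive examples than for negative ones, so gradient norms alone leak the labels. A minimal sketch of that scoring step (variable names are illustrative, not this package's API):

    import torch
    from sklearn.metrics import roc_auc_score

    def norm_attack_auc(cut_layer_grads, true_labels):
        # cut_layer_grads: (n_samples, hidden_dim) gradients the client
        #                  receives from the server during backpropagation
        # true_labels:     (n_samples,) labels hidden from the client,
        #                  used here only to evaluate the leakage
        norms = torch.norm(cut_layer_grads, p=2, dim=1)
        # If the norms separate the classes well, this AUC is far from 0.5
        # and the labels have effectively leaked.
        return roc_auc_score(true_labels.cpu().numpy(), norms.cpu().numpy())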

Defense

| Defense | Example | Reference |
| --- | --- | --- |
| Max Norm | notebook | original paper |
| NoPeek | notebook | original paper |
| Shredder | notebook | original paper |
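To give a flavor of the defenses, the max-norm heuristic counters the norm attack by adding zero-mean Gaussian noise to each per-example gradient at the cut layer so that all perturbed gradients have roughly the same expected norm. A minimal sketch of that idea (illustrative, not this package's implementation):

    import torch

    def max_norm_perturb(grads):
        # grads: (n_samples, hidden_dim) per-example gradients at the cut layer
        norms = torch.norm(grads, p=2, dim=1)
        max_norm = norms.max()
        d = grads.shape[1]
        # Choose a per-example noise scale so that
        # E[||g + noise||^2] = ||g||^2 + d * sigma^2 ≈ max_norm^2,
        # erasing the norm gap that the norm attack exploits.
        sigma = torch.sqrt(torch.clamp(max_norm**2 - norms**2, min=0.0) / d)
        noise = torch.randn_like(grads) * sigma.unsqueeze(1)
        return grads + noise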

License

This software is released under the MIT License; see LICENSE.txt.

