mai / adversarial-hatespeech
Repository graph at revision c995a9acf86c083a726a9f93e5187e72b654d74b
Branch: master (default, protected)
[Graph date axis: one commit on 8 Aug, the rest between 5 and 31 Mar]
Add project report (master)
Add remark on LIME in README
add educated guesses with lime
Add explanations to data files
Update instructions
Correct batchscripts, add instructions
Add average abusive score to testing
train set finished attacking
Add naive attacks on test set
rename evaluate to predict, add naive attacks with substitution_dict
Add substitutions_val_no-letters.json
Attack val set
Add functionality to attack val set
Add substitution counting to analyze.py
Debug analyze.py
Add script for analyzing stats
Explain no-letters
Add stats to attacks file
Add successrate calculation
Add LIME explanation script
Full attack run through test set
Reformat dataset.json for better searchability
[debug] saving and fast-forwarding is now working
Add dataset fast-forwarding
yes
Debug, only attack abusive examples
Add attack function
Debug eval script
Move evaluation to utils/eval.py
New script for attacking
Add ability to load the dataset
Add test script
Initial commit
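The log above mentions naive attacks driven by a `substitution_dict` and a success-rate calculation. A minimal sketch of how such an attack and metric might look, assuming a boolean classifier wrapper `predict` and an illustrative `substitution_dict` (both hypothetical stand-ins; the repository's actual scripts and dictionary contents are not shown here):

```python
# Hypothetical character map; the repository's real substitution_dict
# (e.g. substitutions_val_no-letters.json) is not reproduced here.
substitution_dict = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def attack(text, predict):
    """Greedily substitute characters left to right until `predict`
    (a stand-in for the repo's classifier) stops flagging the text."""
    chars = list(text)
    n_subs = 0
    for pos, c in enumerate(chars):
        if not predict(text):          # attack already succeeded
            break
        if c.lower() in substitution_dict:
            chars[pos] = substitution_dict[c.lower()]
            text = "".join(chars)
            n_subs += 1
    return text, not predict(text), n_subs

def success_rate(examples, predict):
    """Fraction of abusive examples whose attacked form evades the
    classifier (the 'successrate calculation' named in the log)."""
    attacked = [attack(t, predict) for t in examples if predict(t)]
    return sum(ok for _, ok, _ in attacked) / max(len(attacked), 1)

# Toy classifier for illustration only: flags text containing "idiots".
toy_predict = lambda t: "idiots" in t
adv, success, n = attack("you idiots", toy_predict)  # → "y0u 1diots"
```

Only examples the classifier originally flags are attacked, matching the "only attack abusive examples" commit; the real attack presumably operates on model scores rather than a substring check.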