Blog

  • PNSMN

    PNSMN – Pose ta Node Sur Mon Network

/!\ This version might be deprecated. A new UI is being developed. New features include:

- Network topology
- Range scanning
- Interface selection
- Attack plugin integration, and more

    Introduction

This is a quick project I made to learn NodeJS, so some parts of the code might be a bit rough; any tips and advice are welcome 🙂! PNSMN is a NodeJS-based live JavaScript injection tool. Able to discover clients on the network, PNSMN can inject a hooking script into any machine found on the network. This script then allows attacks to be triggered in real time from the UI against the client, such as live text modification, live input scanning (keylogging), live redirection, and a lot more.

Installing PNSMN

PNSMN was originally designed to be installed on a Raspberry Pi running Kali Linux, to be used as a mobile “penetration testing ;)” platform, but it can of course be installed on any Debian-based OS.

    Prerequisite

- NodeJS v6 or higher
- nmap (network scanning tool)
- MITMf (powerful man-in-the-middle tool)

    Step – 1

Clone the Git repository into the folder of your choice:

    cd <path to install>
    git clone https://github.com/MIDMX/PNSMN
    

    Step – 2

Install the Node packages from the “package.json” file:

    cd <installation path>/PNSMN_UI
    npm install
    

No errors? The installation was successful 🙂
An error? Please comment below and I'll do my best to solve it for you, mate!

Usage

    Logging In

Start the Node server from your terminal:

    cd <installation path>/PNSMN_UI
    node index.js
    

The server is now up and running. Open your favorite browser and enter “localhost:5001” in the address bar. The PNSMN login form should be displayed on your screen.
    username: PNSMN
    password: PNSMN


    Using the UI

Once logged in, the UI will start a network scan (this might take up to a minute). At the end of the scan, the clients found are displayed on the screen. To hook one of these clients, click the “Hook” button. The client's icon will be replaced with a loading animation, and information about the client's web usage will be displayed in the terminal underneath. Once a hook is successfully injected, the “attack” section is displayed and the client's icon is replaced with a “link” icon, indicating that the client is currently hooked. As a last step, choose an attack in the “attack” section and it will be sent instantaneously to the hooked client. Have fun pen-testing your “OWN NETWORK ;)” (not too much though).


    Visit original content creator repository https://github.com/timekadel/PNSMN
  • healthmate-finder

    💪 healthmate finder ‘Helparty’

“My workout buddy, Helparty”

Everyone who works out has had the experience of deciding to go to the gym but not being able to get out the door. In moments like that, you probably just thought to yourself, “It would be nice to have a friend to go with,” and left it there. So we built it: ‘Helparty’, a service that matches you with a neighborhood friend to work out with!

Overall Project Structure

Project architecture diagram

    DB ERD

Helparty ERD

Project Goals

• Strove to write efficient code with performance in mind.
• Aimed to write extensible code that follows object-oriented principles.
• Worked to build infrastructure that can handle heavy traffic and deliver a stable service.
• Assuming a collaborative setting, took care to write code that others can easily read.
• Wrote isolated tests so that they do not depend on other code.

Tech Stack

    1. Java 11
    2. Spring Boot
    3. JUnit
    4. MySQL
    5. MyBatis
    6. Redis
    7. Jenkins
    8. Naver Cloud

Project Focus Areas

• Parallel development workflow using GitFlow
• Method naming that makes the purpose of the code easy to see
• Extensibility maintained through object-oriented coding practices
• Unit tests isolated within their layer, independent of other code
• Repeated logic separated from core logic (via AOP, ArgumentHandlerResolver, Interceptor)
• CI/CD environment built with Jenkins
• One application per cloud server for high scalability
• Performance improved with a Redis cache for page views requested repeatedly by many users (see the sketch after this list)
• Session consistency maintained with a Redis session server
• Server load minimized by using Log4j2 for logging
• Load balancing implemented with an NginX reverse proxy
• DB performance improved by implementing DB replication
• Performance improved by analyzing MySQL query execution plans and tuning queries
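
As a minimal sketch of the Redis page cache mentioned above (hypothetical class and method names, assuming Spring's cache abstraction backed by Redis; not the actual Helparty code):

// A minimal sketch of the Redis page cache described above. All class and
// method names are hypothetical; it assumes @EnableCaching and a Redis
// CacheManager are configured elsewhere.
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class GymPostService {

    // Page results are stored in a Redis-backed "gymPosts" cache keyed by
    // page number, so repeated views of a popular page skip MySQL entirely.
    @Cacheable(value = "gymPosts", key = "#pageNumber")
    public PostPage findPage(int pageNumber) {
        return loadPageFromDatabase(pageNumber); // e.g. a MyBatis mapper call
    }

    private PostPage loadPageFromDatabase(int pageNumber) {
        // ... MyBatis query omitted in this sketch ...
        return new PostPage(pageNumber);
    }
}

class PostPage {
    private final int pageNumber;
    PostPage(int pageNumber) { this.pageNumber = pageNumber; }
}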

Issue Resolution Process

Screen Design

Kakao Oven – https://ovenapp.io/view/ZmMg4lnHw2iVSxfO0UwY1NzTOkWoNsiZ/liSyR

Helparty prototype

Feature Definitions

GitHub Flow

    Visit original content creator repository https://github.com/f-lab-edu/healthmate-finder
  • LMSstat

    LMSstat

Automation of statistical tests with an identical data input, aiming to reduce the arduous work of searching for packages and reshaping data inputs.

    The package includes

• Simple statistics: u-test, t-test, post hocs of ANOVA and Kruskal–Wallis, with FDR-adjusted values

• Bar, box, dot, and violin plots with significance annotations (u-test, t-test, post hocs of ANOVA and Kruskal–Wallis)

    • Scaling & Transformation

    • Normality check (Shapiro Wilk test)

    • Scheirer–Ray–Hare Test

    • Volcano plot

    • Heatmap

    • PERMANOVA

    • NMDS

    • PCA

    • PCoA

    Contribution acknowledgement

Oct. 1, 2021 – Daehwan Kim

    • Allstats_new optimization for faster processing

    • bug fix of Allstats (regarding LETTERS210729)

    Instructions

    Installation

    Download R

    https://cran.r-project.org/bin/windows/base/

    Download R Studio

    https://www.rstudio.com/products/rstudio/download/

    Download Rtools

    https://cran.r-project.org/bin/windows/Rtools/

    Download package in R

    install.packages("devtools")
    
    devtools::install_github("CHKim5/LMSstat")
    
    library(LMSstat)
    

    Basic structure of the Data

    Used in

    • Simple statistics
    • Barplot, Boxplot, Dotplot
    • Volcano plot
    • Scheirer–Ray–Hare Test
    • PERMANOVA
    • NMDS
    • PCA
    • Scaling & Transformation
    • Normality check (Shapiro Wilk test)
    • Heatmap
    #Sample Data provided within the package
    
    data("Data")
    
    # Uploading your own Data
    
    setwd("C:/Users/82102/Desktop")
    
    Data<-read.csv("statT.csv",header = F)
    

The column “Multilevel” is mandatory for the code to run flawlessly.

If Multilevel is not used, fill the column with random characters.

The data file needs to follow the format below.

Mind the capitalization: Sample, Multilevel, Group

    statT.csv

    Used in

    • PERMANOVA
    #Sample Data provided within the package
    data("Classification")
    
    # Uploading your own Data
    Classification<-read.csv("statT_G.csv",header = F)
    

    statT_G.csv

    Univariate statistics

    Statfile<-Allstats_new(Data,Adjust_p_value = T, Adjust_method = "BH") # Optimized code using lapply / data.table for faster processing contributed by Daehwan Kim
    
    Statfile<-Allstats(Data,Adjust_p_value = T, Adjust_method = "BH") # Previous version using for-loop
    
    Adjustable parameters
    • Adjust_p_value = T # Set True if adjustment is needed

• Adjust_method = “BH” # Adjustment methods frequently used: c(“holm”, “hochberg”, “hommel”, “bonferroni”, “BH”, “BY”, “fdr”, “none”)

    head(Statfile[["Result"]]) # includes all statistical results
    
    write.csv(Statfile[["Result"]],"p_value_result.csv")  # Write csv with all the p-value included
    

    Plots

    # Makes a subdirectory and saves box plots for all the variables
    AS_boxplot(Statfile,asterisk = "u_test") 
    
    # Makes a subdirectory and saves dot plots for all the variables
    AS_dotplot(Statfile,asterisk = "t_test") 
    
    # Makes a subdirectory and saves bar plots for all the variables
    AS_barplot(Statfile,asterisk = "Scheffe")
    
    # Makes a subdirectory and saves violin plots for all the variables
    AS_violinplot(Statfile,asterisk = "Scheffe")
    

              AS_boxplot(Statfile)              AS_dotplot(Statfile)

              AS_barplot(Statfile)              AS_violinplot(Statfile)

    Adjustable parameters
    • asterisk = “t_test” #c(“Dunn”,”Scheffe”,”u_test”,”t_test”)
    • significant_variable_only = F # If set to TRUE, insignificant results will not be plotted
    • color = c(“#FF3300”, “#FF6600”, “#FFCC00”, “#99CC00”, “#0066CC”, “#660099”) # Colors for the plots
    • legend_position = “none” # “none”,”left”,”right”,”bottom”,”top”
    • order = NULL # Order of the groups c(“LAC”,”LUE”,”WEI”,”SDF”,”HGH”,”ASH”)
    • tip_length = 0.01 # significance tip length
    • label_size = 2.88 # significance label size
    • step_increase = 0.05 #significance step increase
    • width = 0.3 # box width ; size = 3 # dot size
    • fig_width = NA #figure size
    • fig_height = NA #figure size
    • Y_text = 12 # Y title size
    • X_text = 10 # X text size
    • Y_lab = 10 #y axis text size
    • T_size = 15 # Title size
    • sig_int = c(0.1,0.05) # significance interval
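
As an illustration, several of these parameters can be combined in a single call; the group order below reuses the example shown above:

# Example combining several adjustable parameters
AS_boxplot(Statfile,
           asterisk = "Dunn",
           significant_variable_only = T,
           color = c("#FF3300", "#0066CC", "#99CC00"),
           legend_position = "bottom",
           order = c("LAC", "LUE", "WEI", "SDF", "HGH", "ASH"),
           sig_int = c(0.1, 0.05))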

    Scaling & Transformation

    scaled_data<-D_tran(Data,param = "Auto")
    

               Raw_Data                     Scaled_Data

    Adjustable parameters
• param = “None” # “None”, “Auto”, “log10”, “Pareto”

    • save = F #Set true if datafile is to be saved

    Normality check

    #Shapiro Wilk test
    
    Result<-Norm_test(Data)
    
    write.csv(Result,"Normality_test_Result.csv")
    

    Scheirer–Ray–Hare Test

    # csv files including significant variables (Multilevel, Group, interaction) and a Venn diagram are downloaded
    SRH(Data)
    

    Adjustable parameters
    • Adjust_p_value = T # Set True if adjustment is needed
    • Adjust_method = “BH” # Adjustment methods frequently used. c(“holm”, “hochberg”, “hommel”, “bonferroni”, “BH”, “BY”,”fdr”, “none”)
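
For example, a call with FDR adjustment enabled looks like this:

SRH(Data, Adjust_p_value = T, Adjust_method = "BH")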

    Volcano plot

    # Makes a subdirectory and saves Volcano plots for different combination of groups
    Test<-Allstats(Data)
    Volcano(Test,asterisk = "t-test")
    

    Adjustable parameters
    • asterisk = “t-test” #statistics inheriting from Allstats “Scheffe”, “t-test”, “u-test”, “Dunn”
    • reverse = T # T, F reverse the direction of fold change
    • fig_width = NA #figure size
    • fig_height = NA #figure size
    • FC_log = 2 # Fold change log transformation value
    • pval_log = 10 #p_value log transformation value
    • dotsize = 3 #dotsize
    • x_limit = c(-2,2) #x axis limt
    • y_limit =c(0,6) #y axis limit
    • pval_intercept = 0.05 # intercept for identification
    • sig_label = T # T,F label significant variables
    • color=c(“#FF3300″,”#FF6600″,”#FFCC00”) #colors used for ggplots.
    • fixed_limit = F #whether the limit should be fixed or not T, F
    • max_overlap = 20 #maximum overlap for labels
    • FC_range = c(-1.5,1.5) #significant fold change range

    Heatmap

    # Makes a subdirectory and saves Heatmap
    
    scaled_data<-D_tran(Data,param = "Auto")
    
    AS_heatmap(scaled_data) #data inheriting from D_tran
    
    dev.off() # Saved as PDF
    

    Adjustable parameters
    • col =c(“green”, “white”, “red”) # colors for heatmap
    • col_lim = c(-3, 0, 3) # color boundaries
    • reverse = T # T,F Reverse column and rows
• distance = “pearson” # Distance matrix for HCA: “pearson”, “manhattan”, “euclidean”, “spearman”, “kendall”
    • rownames = T # T,F
    • colnames = T # T,F
• Hsize = c(3,6) # Width & Height c(a,b)
    • g_legend = “Group” # Annotation legend title
    • h_legend = “Color Key” # Heatmap legend title
    • Title =”Title” # Title
    • T_size = 10 # Title text size
    • R_size = 3 # row text size
    • C_size = 3 # column text size
    • Gcol =c(“ASD” = “black”,”HGH”=”red”,”LAC”=”blue”,”LUE” =”grey”,”SDF” = “yellow”,”WEI”=”green”) # Color for top_annotation bar
    • dend_h = 0.5 #dendrite height
• a_h = 0.2 # top annotation height

    Multivariate statistics

    PERMANOVA

    data("Data")
    
    data("Classification") 
    

    Single factor

    PERMANOVA done with the Group column

    Indiv_Perm(Data) # The group information is treated as a factor
    

    Multiple Factors

    Loops PERMANOVA over different classes provided by Classification

    Result<-Multi_Perm(Data,Classification) # The group information is treated as factors
    

    Adjustable parameters
• method = Dissimilarity index c(“manhattan”, “euclidean”, “canberra”, “clark”, “bray”, “kulczynski”, “jaccard”, “gower”, “altGower”, “morisita”, “horn”, “mountford”, “raup”, “binomial”, “chao”, “cao”, “mahalanobis”, “chisq”, “chord”)

    NMDS

    # Makes a subdirectory and saves NMDS plots for all of the distance metrics
    NMDS(Data,methods = c("manhattan","bray","euclidean"))
    

    NMDS plot with bray distance and p-value from PERMANOVA

    Adjustable parameters
• methods = Dissimilarity index c(“manhattan”, “euclidean”, “canberra”, “clark”, “bray”, “kulczynski”, “jaccard”, “gower”, “altGower”, “morisita”, “horn”, “mountford”, “raup”, “binomial”, “chao”, “cao”, “mahalanobis”, “chisq”, “chord”)

    • color = c(“#FF3300”, “#FF6600”, “#FFCC00”, “#99CC00”, “#0066CC”, “#660099”) # Colors for the plots

    • legend_position = “none” # “none”,”left”,”right”,”bottom”,”top”

    • fig_width = NA #figure size

    • fig_height = NA #figure size

    • names = F # used to indicate sample names

    • dotsize = 3 # dotsize

    • labsize = 3 # label size

    PCA

    # Makes a subdirectory and saves PCA plot
PCA(Data,components = c(1,2),legend_position = "none")
    

    PCA plot with selected components

    Adjustable parameters
    • color = c(“#FF3300”, “#FF6600”, “#FFCC00”, “#99CC00”, “#0066CC”, “#660099”) # Colors for the plots
    • legend_position = “none” # “none”,”left”,”right”,”bottom”,”top”
    • fig_width = NA #figure size
    • fig_height = NA #figure size
    • components = c(1,2) # selected components
    • names = F # used to indicate sample names
    • dotsize = 3 # dotsize
    • labsize = 3 # label size
    • ellipse = T # T or F to show ellipse

    PCoA

    # Makes a subdirectory and saves PCoA plot
    PCoA(Data,components = c(1,2),methods = c("bray", "manhattan"))
    

    PCoA plot with selected components

    Adjustable parameters
    • color = c(“#FF3300”, “#FF6600”, “#FFCC00”, “#99CC00”, “#0066CC”, “#660099”) # Colors for the plots
    • legend_position = “none” # “none”,”left”,”right”,”bottom”,”top”
    • fig_width = NA #figure size
    • fig_height = NA #figure size
    • components = c(1,2) # selected components
    • names = F # used to indicate sample names
    • dotsize = 3 # dotsize
    • labsize = 3 # label size
    • ellipse = T # T or F to show ellipse
• methods = Dissimilarity index c(“manhattan”, “euclidean”, “canberra”, “clark”, “bray”, “kulczynski”, “jaccard”, “gower”, “altGower”, “morisita”, “horn”, “mountford”, “raup”, “binomial”, “chao”, “cao”, “mahalanobis”, “chisq”, “chord”)
    Visit original content creator repository https://github.com/CHKim5/LMSstat
  • rentx-api

Car Registration

Functional Requirements (RF)

• It should be possible to register a new car.
• It should be possible to list all car categories.

Business Rules (RN)

• It should not be possible to register a car with an already existing license plate.
• It should not be possible to change the license plate of an already registered car.
• A car should be registered as available by default.
• The user responsible for registration must be an admin user.

Car Listing

Functional Requirements (RF)

• It should be possible to list all available cars.
• It should be possible to list car models by category name.
• It should be possible to list car models by brand name.
• It should be possible to list car models by name.

Business Rules (RN)

• It should not be necessary to be logged into the system to list cars.

Car Specification Registration

Functional Requirements (RF)

• It should be possible to register a specification for a car.
• It should be possible to list a car's specifications.
• It should be possible to list all cars.

Business Rules (RN)

• It should not be possible to register a specification for an unregistered car.
• It should not be possible to register an already existing specification for the same car.
• It should not be necessary to be logged into the system to list cars.

Car Image Registration

Functional Requirements (RF)

• It should be possible to register an image for a car.
• It should be possible to list all cars.

Non-Functional Requirements (RNF)

• Multer should be used for file uploads (see the sketch after this section).

Business Rules (RN)

• It should not be possible to register an image for an unregistered car.
• It should be possible to register more than one image for the same car.
• The user responsible for registration must be an admin user.
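
As a minimal sketch of the Multer requirement above (assumed paths, route, and field names; not the repository's actual code):

// Hypothetical Multer setup for car image uploads; every name is an assumption.
import express from "express";
import multer from "multer";
import crypto from "crypto";

const upload = multer({
  storage: multer.diskStorage({
    destination: "./tmp/cars", // assumed upload folder
    filename: (req, file, callback) => {
      // Prefix with random bytes to avoid filename collisions
      const hash = crypto.randomBytes(16).toString("hex");
      callback(null, `${hash}-${file.originalname}`);
    },
  }),
});

const app = express();

// "images" is the assumed multipart field name; .array() allows several
// images for the same car, matching the business rule above.
app.post("/cars/:id/images", upload.array("images"), (req, res) => {
  const files = req.files as Express.Multer.File[];
  return res.status(201).json(files.map((file) => file.filename));
});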

Car Rental

Functional Requirements (RF)

• It should be possible to rent a car.

Non-Functional Requirements (RNF)

Business Rules (RN)

• It should be possible to rent an available car, with a minimum duration of 24 hours (see the sketch after this list).
• It should not be possible to rent an unavailable car.
• It should not be possible for a user to rent more than one car at a time.
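
As a small illustration of the minimum-duration rule, a hypothetical helper might look like this:

// Hypothetical helper enforcing the minimum 24-hour rental duration.
const MINIMUM_RENTAL_HOURS = 24;

function isValidRentalPeriod(start: Date, expectedReturn: Date): boolean {
  const hours =
    (expectedReturn.getTime() - start.getTime()) / (1000 * 60 * 60);
  return hours >= MINIMUM_RENTAL_HOURS;
}

// A same-day return is rejected:
isValidRentalPeriod(
  new Date("2021-01-01T10:00:00Z"),
  new Date("2021-01-01T20:00:00Z"),
); // false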

    Visit original content creator repository
    https://github.com/luizsmatos/rentx-api

  • SPPU-2019-TE-AI-Lab

    SPPU-2019-TE-AI-Lab

    SPPU Computer Engineering Third Year (TE) Artificial Intelligence (AI) Lab Assignments (2019 Pattern)

Aledutron YouTube AI Lab Playlist Link: https://www.youtube.com/playlist?list=PLlShVH4JA0ot3KGVHgl8FVTl8-JNCrPP5

Group A

1. Implement Depth First Search and Breadth First Search algorithms. Use an undirected graph and develop a recursive algorithm for searching all the vertices of a graph or tree data structure.
   Code: Group-A/Q1.py
   Video: https://www.youtube.com/watch?v=Esh4Qf_t9Bw&list=PLlShVH4JA0ot3KGVHgl8FVTl8-JNCrPP5&index=1&pp=iAQB

2. Implement the A* algorithm for any game search problem.

3. Implement a greedy search algorithm for any of the following applications:
   I. Selection Sort
   II. Minimum Spanning Tree
   III. Single-Source Shortest Path Problem
   IV. Job Scheduling Problem
   V. Prim's Minimal Spanning Tree Algorithm
   VI. Kruskal's Minimal Spanning Tree Algorithm
   VII. Dijkstra's Minimal Spanning Tree Algorithm
   Code: Group-A/Q3.py
   Video: https://www.youtube.com/watch?v=tGsQ50rC2SA&list=PLlShVH4JA0ot3KGVHgl8FVTl8-JNCrPP5&index=2&pp=iAQB

Group B

4. Implement a solution for a Constraint Satisfaction Problem using Branch and Bound and Backtracking for the n-queens problem or a graph coloring problem.
   Code: Group-B/Q4A.py, Group-B/Q4B.py
   Videos: https://www.youtube.com/watch?v=1j9vvQWVblc&list=PLlShVH4JA0ot3KGVHgl8FVTl8-JNCrPP5&index=3&pp=iAQB
   https://www.youtube.com/watch?v=N1qfrKSbS1Q&list=PLlShVH4JA0ot3KGVHgl8FVTl8-JNCrPP5&index=4&pp=iAQB

5. Develop an elementary chatbot for any suitable customer interaction application.

Group C

6. Implement any one of the following expert systems:
   I. Information management
   II. Hospitals and medical facilities
   III. Help desks management
   IV. Employee performance evaluation
   V. Stock market trading
   VI. Airline scheduling and cargo schedules
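
For reference, a minimal Python sketch of the recursive DFS and queue-based BFS asked for in Q1 (the example graph is made up):

from collections import deque

# Undirected graph as an adjacency list (example data)
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def dfs(node, visited=None):
    """Recursive depth-first search; prints vertices in visit order."""
    if visited is None:
        visited = set()
    visited.add(node)
    print(node, end=" ")
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(neighbour, visited)

def bfs(start):
    """Breadth-first search using a FIFO queue."""
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        print(node, end=" ")
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

dfs("A")   # A B D C
print()
bfs("A")   # A B C D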


    Visit original content creator repository
    https://github.com/ganimtron-10/SPPU-2019-TE-AI-Lab

  • ttt-plus-plus

    TTT++

    This is an official implementation for the paper

    TTT++: When Does Self-supervised Test-time Training Fail or Thrive? @ NeurIPS 2021
    Yuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, Alexandre Alahi

    TL;DR: Online Feature Alignment + Strong Self-supervised Learner 🡲 Robust Test-time Adaptation

    • Results
      • reveal limitations and promise of TTT, with evidence through synthetic simulations
      • our proposed TTT++ yields state-of-the-art results on visual robustness benchmarks
    • Takeaways
      • both task-specific (e.g. related SSL) and model-specific (e.g. feature moments) info are crucial
      • need to rethink what (and how) to store, in addition to model parameters, for robust deployment

    Synthetic

    Please check out the code in the synthetic folder.

    CIFAR10/100

    Please check out the code in the cifar folder.

    Citation

    If you find this code useful for your research, please cite our paper:

    @inproceedings{liu2021ttt++,
      title={TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?},
      author={Liu, Yuejiang and Kothari, Parth and van Delft, Bastien Germain and Bellot-Gurlet, Baptiste and Mordan, Taylor and Alahi, Alexandre},
      booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
      year={2021}
    }

    Contact

    yuejiang [dot] liu [at] epfl [dot] ch

    Visit original content creator repository https://github.com/vita-epfl/ttt-plus-plus
  • Nirbhaya

    Nirbhaya

    Team Raksha/SRMIST MOZOHACK 19/Nirbhaya App
    Nirbhaya App – Push 01

    Tasks Covered: Data Pre-processing

    • imported COBRA crime datasets ranging from 2008-2019.

    • removed redundant features/columns

    • removed null/missing entries

    • removed outlier values with low frequencies

    • split 24 hours into 12 slots (12:00 AM – 2:00 AM, 2:00 AM – 4:00 AM, and so on…)

    • labelled each record with a day of year (1-365)

    • suitably labelled each record for the above changes
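
A minimal pandas sketch of the time-slot and day-of-year labelling above (file and column names are assumptions):

# Hypothetical sketch of the labelling steps; names are assumptions.
import pandas as pd

df = pd.read_csv("cobra_2008_2019.csv")  # assumed merged COBRA dataset

# 12 two-hour slots: 12:00 AM - 2:00 AM -> 0, 2:00 AM - 4:00 AM -> 1, ...
df["time_block"] = pd.to_datetime(df["occur_time"]).dt.hour // 2

# Day of year for each record, as in the labelling above
df["day_of_year"] = pd.to_datetime(df["occur_date"]).dt.dayofyear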

Next tasks:
• Create a hierarchy of time block, day, and neighbourhood, respectively, with a count attribute.
• Pass it to k-means to obtain 3 distinct labelled categories of crime.
• Use the categories obtained to gauge whether an area is dangerous or not.

    Nirbhaya App – Push 02

    • created hierarchical structure to label according to crime level

    • removed lower lying values from dataset to reduce complexity of said structure

Next tasks:
• Implement k-means to label crime levels according to general trends.
• Use these labels to gauge whether an area is dangerous or not.

    Nirbhaya App – Push 03

• Used the k-means clustering algorithm on the unlabelled dataset created earlier to form 3 separate clusters.

• Assigned a category value (1, 2, 3) to each row.
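
A minimal scikit-learn sketch of this clustering step (file and column names are assumptions):

# Hypothetical sketch of the k-means labelling; names are assumptions.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("cobra_hierarchy.csv")  # assumed time-block/day/neighbourhood counts

features = df[["time_block", "day_of_year", "count"]]

kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
df["category"] = kmeans.fit_predict(features) + 1  # shift labels 0-2 to 1-3

df.to_csv("cobra_labelled.csv", index=False)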

Next task: build a progressive web app and use the category-wise safety results from the dataset to route around unsafe areas.

    Visit original content creator repository
    https://github.com/mozohack/Nirbhaya

  • cdpcurl

    cdpcurl

A curl-like tool with CDP request signing. Inspired by and built from awscurl. See that repository's README for installation and usage instructions beyond what is provided here.

    Building

    Create a virtualenv if desired.

    # For typical virtualenv
    $ virtualenv cdpcurlenv
    $ . cdpcurlenv/bin/activate
    # For pyenv
    $ pyenv virtualenv cdpcurlenv
    $ pyenv activate cdpcurlenv

    Then, in this directory:

    $ pip install .
    

    Usage

    Run cdpcurl --help for a complete list of options.

    Before using cdpcurl, generate an access key / private key pair for your CDP user account using the CDP management console. You have two options for passing those keys to cdpcurl:

    • pass the keys to cdpcurl using the --access-key and --private-key options
    • (recommended) create a profile in $HOME/.cdp/credentials containing the keys, and then use the --profile option in cdpcurl calls

    [myuserprofile]
    cdp_access_key_id = 6744f22e-c46a-406d-ad28-987584f45351
    cdp_private_key = abcdefgh...................................=
    

    Most CDP API calls are POST requests, so be sure to specify -X POST, and provide the request content using the -d option. If the -d option value begins with “@”, then the remainder of the value is the path to a file containing the content; otherwise, the value is the content itself.

    To form the URI, start by determining the hostname based on the service being called:

    • iam: iamapi.us-west-1.altus.cloudera.com
    • all other services: api.us-west-1.cdp.cloudera.com

    The correct URI is an https URL at the chosen host, with a path indicated for your desired endpoint in the API documentation.

    Examples

    Get your own account information:

    $ cdpcurl --profile demo -X POST -d '{}' https://iamapi.us-west-1.altus.cloudera.com/iam/getAccount

    List all environments:

    $ cdpcurl --profile sandbox -X POST -d '{}' https://api.us-west-1.cdp.cloudera.com/api/v1/environments2/listEnvironments

    Request Signing

    A CDP API call requires a request signature to be passed in the “x-altus-auth” header, along with a corresponding timestamp in the “x-altus-date” header. cdpcurl constructs the headers automatically. However, if you would rather use a different HTTP client, such as ordinary curl, then you may directly use the cdpv1sign script within cdpcurl to generate these required headers. You may then parse the header values from the script output and feed them to your preferred client. Note that CDP API services will reject calls with timestamps too far in the past, so generate new headers for each call.

    $ cdpv1sign -X POST https://api.us-west-1.cdp.cloudera.com/api/v1/environments2/listEnvironments
    Content-Type: application/json
    x-altus-date: Fri, 28 Aug 2020 20:38:38 GMT
    x-altus-auth: (very long string value)
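
For reference, a minimal Python sketch that shells out to cdpv1sign, parses the emitted headers, and issues the call with the requests library (an assumption-laden illustration, not part of cdpcurl; the endpoint is the listEnvironments example from above):

# Hypothetical sketch: sign with cdpv1sign, then call the API with requests.
import subprocess
import requests

url = "https://api.us-west-1.cdp.cloudera.com/api/v1/environments2/listEnvironments"

# cdpv1sign prints one "Name: value" header per line
output = subprocess.run(
    ["cdpv1sign", "-X", "POST", url],
    check=True, capture_output=True, text=True,
).stdout

headers = {}
for line in output.splitlines():
    name, _, value = line.partition(": ")
    if name and value:
        headers[name] = value

# Signatures expire quickly, so send the request right away
response = requests.post(url, headers=headers, json={})
print(response.json())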

    The signature algorithm specification is available from the API documentation.

    License

    Copyright (c) 2020, Cloudera, Inc. All Rights Reserved.

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program. If not, see https://www.gnu.org/licenses/.


    The GNU AGPL v3 is available in LICENSE.txt.

    Additional license information is available in NOTICE.txt.

    Visit original content creator repository
    https://github.com/cloudera/cdpcurl