Live Instructor-Led Online Training: Tools in Programming II courses are delivered using an interactive remote desktop.
During the course each participant will be able to perform Tools in Programming II exercises on their remote desktop provided by Qwikcourse.
Select from the courses listed in the category that interests you.
If you are interested in learning a course under this category, click the "Book" button and purchase the course. Select your preferred schedule at least 5 days ahead. You will receive an email confirmation and we will put you in touch with the trainer of your selected course.
PlanarTrainer
PlanarTrainer is an open source computer vision (CV) tool for finding, matching, selecting, and saving features from a training image to a database (currently just flat YAML, JSON, and XML files). It also supports testing pose algorithms using matched features. As the name suggests, it is targeted mostly at planar feature matching, although it also supports right-clicking keypoints to save manually measured 3D information, or loading correspondences between 3D coordinates in a training image and image locations in a query image from a text file (see also the 3D point cloud to 2D feature selection, although that UI still requires some work).
The goal of the course is to implement a convolutional neural network that determines whether the person in a portrait image is wearing glasses, train it on CelebA images, tune hyperparameters, and apply regularization to improve performance. The CelebA dataset is used for training and testing the network; it is not included in the repository due to its large size.
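A minimal sketch of the kind of network the course builds, assuming Keras/TensorFlow; the input size, layer widths, and dropout rate are illustrative assumptions rather than the course's exact architecture:

    # Hypothetical binary "glasses / no glasses" CNN in Keras.
    # Input size, layer widths and the dropout rate are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_glasses_cnn(input_shape=(128, 128, 3)):
        model = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dropout(0.5),                    # regularization, tuned as a hyperparameter
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # 1 = wearing glasses
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # model = build_glasses_cnn()
    # model.fit(train_images, train_labels, validation_split=0.1, epochs=10)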
Image Classifier CLI
An easy-to-use CLI tool for training and testing image classifiers.
Key Features
Can handle ANY image size (but you need to specify it!)
Can handle ANY number of labels
Limitations
All data are assumed to be of the SAME size
Classes are based only on existing data (see the sketch below)
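To illustrate the two limitations above (one fixed image size specified up front, and classes taken from whatever data already exists), here is a minimal sketch using TensorFlow's directory loader; the folder layout, image size, and batch size are illustrative assumptions, not the tool's actual interface:

    # Illustrative only: labels are inferred from sub-directory names, and every
    # image is resized to the single size specified up front.
    import tensorflow as tf

    IMAGE_SIZE = (64, 64)   # must be given explicitly; all data are assumed to be this size
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train",        # hypothetical path with one sub-folder per class
        image_size=IMAGE_SIZE,
        batch_size=32,
    )
    print("Classes found in the existing data:", train_ds.class_names)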
This Node.js server allows saving individual layers so that multiple computers can work in parallel to train a multilayer model (Sept 8, 2018). Still a work in progress, but it is basically working. Note: I can't run this from GitHub, so people will have to load their own Node server. I made this on Cloud9 (http://c9.io, now absorbed by AWS), so I am not sure how it will work on your server. On Cloud9, use these steps. (On your own machine you may have to place "sudo" in front of each step.)
Environments and simulators for Learning Algorithms
Collection of environments, simulators and competitions for training & benchmarking Reinforcement Learning and AI algorithms.
Collections of environments
Gym. Collection of classic environments for benchmarking RL, such as Atari, MuJoCo, etc. (OpenAI).
Gym Universe. Huge collection of various environments for benchmarking RL (OpenAI).
ALE. Arcade Learning Environment with Atari games (Marc Bellemare).
Pycolab. Customized Grid-World env (DeepMind).
Vehicle Simulation
Carla (Intel, Toyota).
AirSim. Realistic autonomous vehicle simulator (Microsoft).
Navigation
DeepMind Lab. 3D Navigation in Labyrinths (DeepMind).
VizDoom. 3D Shooting and Navigation in the Doom game.
Project Malmo. 3D Navigation and Quest Solving in the Minecraft game (Microsoft).
AI2Thor. Home indoor 3D Navigation.
HoME Platform. Home indoor 3D Navigation. Based on SUNCG dataset.
MINOS. Home indoor 3D Navigation. Based on SUNCG and Matterport3D datasets (Intel).
House3D. Home indoor 3D Navigation & Visual Question Answering. Based on SUNCG dataset (Facebook).
GibsonEnv. Home indoor 3D Navigation & Locomotion. Based on Gibson, SUNCG, Stanford 2D3DS and Matterport 3D datasets (Stanford).
Gym-Maze. 2D navigation in customizable mazes.
Strategies
PySC2. Starcraft II strategy learning environment (Deepmind, Blizzard).
TorchCraft. Starcraft I strategy learning environment (Facebook).
Locomotion
Roboschool. Locomotion; replicates the proprietary MuJoCo environments with additional improvements (OpenAI).
Control Suite. Set of locomotion environments based on MuJoCo physics engine (DeepMind).
Multi-Agent RL
PommerMan. Multi-Agent (up to 4 players) "Bomberman"-like game.
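Most of the environments above expose some variant of a reset/step loop. As a minimal sketch using the classic OpenAI Gym API (the environment name is only an example; newer Gymnasium releases return extra values from reset and step):

    # Minimal random-agent loop against a classic Gym environment (gym < 0.26 API assumed).
    import gym

    env = gym.make("CartPole-v1")
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()              # random policy, for illustration
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("Episode return:", total_reward)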
Talk to Me Goose (TTMG)
A League of Legends training tool
What is TTMG?
This is a simple application for helping you improve your League of Legends skills. The idea is simple: have something hint at you throughout the game to remember to do important things. Many people have recommended having a metronome clicking in the background to teach you to look at the mini-map. I've taken that idea a bit further and let your computer speak a custom set of phrases as you play, reminding you of the many other important things to do during a game.
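A rough sketch of that idea in Python, assuming the pyttsx3 text-to-speech library; the phrases and the one-minute interval are placeholders, not TTMG's actual defaults:

    # Hypothetical reminder loop: speak a rotating set of phrases at a fixed interval.
    import itertools
    import time
    import pyttsx3

    phrases = ["Check the mini-map", "Watch your mana", "Ward the river"]  # placeholders
    engine = pyttsx3.init()
    for phrase in itertools.cycle(phrases):
        engine.say(phrase)
        engine.runAndWait()
        time.sleep(60)   # assumed one-minute gap between reminders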
Es - Rl
Training of neural networks using variations of evolutionary methods, including the Evolution Strategies approach presented by OpenAI and Variational Optimization.
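The core of the OpenAI-style Evolution Strategies update can be sketched in a few lines of NumPy; the population size, noise scale, and learning rate below are arbitrary illustrative values, not the repository's settings:

    # One Evolution Strategies step: perturb parameters with Gaussian noise,
    # evaluate each perturbation, and move in the reward-weighted direction.
    import numpy as np

    def es_step(theta, fitness_fn, population=50, sigma=0.1, alpha=0.01):
        noise = np.random.randn(population, theta.size)
        rewards = np.array([fitness_fn(theta + sigma * eps) for eps in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize fitness
        return theta + alpha / (population * sigma) * noise.T @ rewards

    # Example: maximize -||theta||^2 (optimum at the origin).
    theta = np.ones(5)
    for _ in range(200):
        theta = es_step(theta, lambda t: -np.sum(t ** 2))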
Local installation
To create a new environment with the required packages, or to update an existing environment to include them, run the corresponding command. Either of the two commands will leave you with an Anaconda virtual environment called ml.
HPC installation
To run the code on the High Performance Computing cluster at the Technical University of Denmark, you first need a user login.
Pip
The easiest way to create the environment on the HPC is using pip. The hpc_python_setup.sh script will set up the environment, which in this case is called mlenv.
Anaconda
Anaconda can be installed on the HPC. Get the latest 64-bit x86 version from the Anaconda download page.
Move the downloaded .sh file to the root of the HPC, then install Anaconda by calling bash Anaconda3-5.0.1-Linux-x86_64.sh at the root.
Follow the installation instructions. My personal root directory is /zhome/c2/b/86488/
Executing jobs on HPC
Connecting
A connection to the HPC can be established over SSH. A local mirror of the user folder on the HPC can be created with sshfs.
Submitting
A single job can be run (not submitted) by executing the run_hpc.sh script. An entire batch of jobs can be submitted using the submit_batch_hpc.sh script; the specific inputs to each of the jobs must be specified in this script in the INPUTS array. An example call to submit_batch_hpc.sh will submit a series of jobs named "SM-experiment-[id]" with a wall clock time limit of 10 hours, requesting a 24-core machine on the hpc queue.
Monitoring
The data-analysis/monitor.py script allows monitoring of multiple jobs running in parallel, e.g. on the HPC. The script takes a directory of checkpoints as input and uses the saved stats.pkl file. It saves summarizing plots in the source checkpoint folder and displays statistics in the console.
Character level language model
A Recurrent Neural Network for training and sampling character-level language models in TensorFlow. In the example below we use a list of Dutch cities as input and generate new city names by learning the character-level patterns in the existing names. The model generates new sequences of characters using the patterns in the input sequence.
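A compact sketch of the character-level idea, assuming Keras and a short placeholder list of names; the vocabulary handling and the tiny LSTM are illustrative, not the repository's exact model:

    # Character-level language model sketch: map each character to an integer,
    # then train an LSTM to predict the next character in the sequence.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    names = ["amsterdam", "rotterdam", "utrecht"]          # placeholder "city" corpus
    text = "\n".join(names)
    chars = sorted(set(text))
    char_to_idx = {c: i for i, c in enumerate(chars)}

    seq_len = 5
    X, y = [], []
    for i in range(len(text) - seq_len):
        X.append([char_to_idx[c] for c in text[i:i + seq_len]])
        y.append(char_to_idx[text[i + seq_len]])

    model = tf.keras.Sequential([
        layers.Embedding(len(chars), 16),
        layers.LSTM(64),
        layers.Dense(len(chars), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(np.array(X), np.array(y), epochs=10, verbose=0)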
BillWiz
Why did we develop it?
It is a group project for software system development capability training at NPU (Northwestern Polytechnical University).
Functions
1. Register.
2. Log in and log out.
3. Add bills to the account book.
4. Add a tag to each bill.
5. Edit bill records.
6. Check the bill log in TODAY, MONTH, TAG, and CUSTOM views.
7. Set an upper spending limit for a month; the app will remind you if you spend too much money!
8. Check the About page for help.
About
When we were learning to write Android applications, we used a lot of open source code for demos and testing.
Mambas is a web-based visualization tool to manage your Keras projects and monitor your training sessions.
As of today, the following functionalities have been implemented:
Multilayer-descriptors-for-medical-image-classification
Developing a method for improving the performance of 2D descriptors by building an n-layer image using different preprocessing approaches, from which multilayer descriptors are extracted and used as feature vectors for training a Support Vector Machine. The different preprocessing approaches are used to build different n-layer images (n = 3, n = 5, etc.). We test both color and gray-level images, two well-known texture descriptors (Local Phase Quantization and Local Binary Pattern), and three of their variants suited for n-layer images (Volume Local Phase Quantization, Local Phase Quantization Three-Orthogonal-Planes, and Volume Local Binary Patterns).
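As a hedged, single-layer illustration of the pipeline (texture descriptor, feature vector, SVM), here is a sketch using scikit-image's Local Binary Pattern and scikit-learn; the multilayer stacking and the LPQ variants described above are not shown:

    # Single-layer illustration: LBP histograms as features for an SVM classifier.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray_image, points=8, radius=1):
        lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        return hist

    # images: list of 2-D grayscale arrays, labels: list of class ids (placeholders).
    # features = np.array([lbp_histogram(img) for img in images])
    # clf = SVC(kernel="linear").fit(features, labels)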
Few Shot Learning
The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. Here, I explored the power of One-Shot Learning with a popular model called "Siamese Neural Network".
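A minimal Keras sketch of the Siamese idea, in which two inputs share one embedding network and the model is trained on the distance between the two embeddings; the 105x105 input (as in the Omniglot one-shot setting) and the layer sizes are assumptions:

    # Siamese network sketch: shared embedding, L1 distance, sigmoid "same class?" output.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_embedding(input_shape=(105, 105, 1)):
        inp = layers.Input(shape=input_shape)
        x = layers.Conv2D(64, 10, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(128, 7, activation="relu")(x)
        x = layers.Flatten()(x)
        x = layers.Dense(256, activation="sigmoid")(x)
        return Model(inp, x)

    embedding = build_embedding()
    left = layers.Input(shape=(105, 105, 1))
    right = layers.Input(shape=(105, 105, 1))
    l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([embedding(left), embedding(right)])
    output = layers.Dense(1, activation="sigmoid")(l1)    # probability the pair matches
    siamese = Model([left, right], output)
    siamese.compile(optimizer="adam", loss="binary_crossentropy")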
Welcome to MoMo API Developer Training Slides
Hello and welcome to the MoMo Developer Training. Some of the themes we shall explore today are listed below. Let's get started.
Introduction
So, why an open API?
MTN Uganda posits that an open API will enable third parties to easily develop, test, and deliver new value propositions that are likely to produce innovative solutions and entice customers to transact more digitally.
Possible use cases to explore
Authorization and Authentication
API Requests with Curl
Sample Code Walkthrough
Best Practices
WiseOwl
This is a fact-based Question Answering system using Apache Solr as the backend search engine, Wikipedia dumps as the information source, and Apache Velocity, HTML, and CSS for the web interface design. The project also uses Linux bash scripts to perform its various functions such as start, stop, training, and indexing.
Features:
Fast and reliable searching using the open source Apache Solr 6.3.0 and Apache Lucene 6.3.0 projects. Apache Solr is used as a search engine which uses the capabilities of Apache Lucene to provide searching.
Custom-made Query Parser based on Apache Lucene 6.3.0 specially optimized for Question Answering.
Named Entity Recognition and Time normalization during indexing using StanfordCoreNLP.
Automatic cleaning and parsing of raw Wikipedia text from the Wikipedia dumps, achieved using the Lucene 6.3 benchmark classes and the WikiClean project.
Answer Type Classification of a given question using Apache OpenNLP's Maxent models. The models are trained on data taken from Tom Morton's thesis, tagging around 1,800 hand-picked questions.
Currently the project is more optimised for Description Type Answers.
Sleek user interface by combining elements of css, html and Apache Velocity.
A bash script which uses the underlying Solr scripts to provide functionality for starting, stopping, indexing, and training. (A minimal query sketch follows this feature list.)
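As promised above, a minimal example of how a client might query the underlying Solr core over HTTP; the core name and the question are placeholders, and only Solr's standard /select handler with the q parameter is assumed, not the project's actual schema:

    # Hypothetical query against a local Solr core using Solr's standard /select handler.
    import requests

    SOLR_URL = "http://localhost:8983/solr/wiseowl/select"   # core name is a placeholder
    params = {"q": "Who invented the telephone?", "wt": "json", "rows": 5}
    docs = requests.get(SOLR_URL, params=params).json()["response"]["docs"]
    for doc in docs:
        print(doc)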
Raspberry Pi Model Zero
This is the base Nerves system configuration for the Raspberry Pi Zero and Raspberry Pi Zero W.
Supported OTG USB modes
The base image activates the dwc2 overlay, which allows the Pi Zero to appear as a device (aka gadget mode). When plugged into a host computer via the OTG port, the Pi Zero will appear as a composite ethernet and serial device. When a peripheral is plugged into the OTG port, the Pi Zero will act as USB host, with somewhat reduced performance due to the dwc_otg driver used in other base systems like the official nerves_system_rpi.
Supported WiFi devices
The base image includes drivers for the Red Bear IoT pHAT and the onboard Raspberry Pi Zero W WiFi module (brcmfmac driver). If you are using another WiFi module (for example, a USB module), you will need to create a custom system image. Before doing this, check whether the official nerves_system_rpi image is a better fit for you: that image configures the USB port in host mode by default and is probably more appropriate for your setup.
This is a Multilingual Semantic Role Labeler being modeled for Chinese. This project is the master project containing all relevant code for dealing with SRL. It includes various modules.
Implement LeNet using TensorFlow to recognize handwritten digits, training with MNIST. Some modifications are made here.
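A hedged Keras sketch of a LeNet-style network for MNIST; this follows the classic LeNet-5 layout rather than the repository's specific modifications:

    # LeNet-style CNN for 28x28 MNIST digits.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0
    x_test = x_test[..., None] / 255.0

    lenet = models.Sequential([
        layers.Conv2D(6, 5, padding="same", activation="tanh", input_shape=(28, 28, 1)),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(10, activation="softmax"),
    ])
    lenet.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    lenet.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))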
Transparent Keras
Transparent Keras aims to provide a very simple way to look under the hood during training of Keras models by defining an extra set of outputs that will be returned by train_on_batch or test_on_batch. The API is extremely simple: all that is provided is a TransparentModel that accepts an extra constructor keyword argument observed_tensors. The created model should behave exactly like a Keras model except for the functions (train|test)_on_batch, which return the extra tensors after their normal return values.
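Based only on the description above, usage might look roughly like this; the layer sizes, the import path, and the assumption that TransparentModel otherwise takes the usual Keras inputs/outputs arguments are mine:

    # Sketch of the described API: ask for an intermediate tensor back from train_on_batch.
    # Assumes TransparentModel accepts the usual Keras Model arguments plus observed_tensors.
    from keras.layers import Input, Dense
    from transparent_keras import TransparentModel   # import path is an assumption

    x = Input(shape=(10,))
    hidden = Dense(32, activation="relu")(x)
    y = Dense(1)(hidden)

    model = TransparentModel(inputs=x, outputs=y, observed_tensors=[hidden])
    model.compile(optimizer="adam", loss="mse")

    # train_on_batch now returns its normal values followed by the observed tensors:
    # loss, hidden_activations = model.train_on_batch(batch_x, batch_y)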
Data-Mining
Preprocessed and analyzed the housing affordability dataset using SAS.
Predicted the current market value of a house/apartment by identifying the main criteria that determine this value.
Implemented regression analysis and determined the significant variables.
Split the original dataset into training and test datasets to score each model's ability to predict the correct values for the target variable (a sketch follows below).
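The original analysis was done in SAS; here is a hedged Python/scikit-learn sketch of the same split-score-and-inspect-coefficients workflow, with the file and column names as placeholders for the housing-affordability variables:

    # Illustrative split + linear regression scoring, mirroring the SAS workflow in Python.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("housing_affordability.csv")        # placeholder file name
    X = df.drop(columns=["market_value"])                # placeholder target column
    y = df["market_value"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))
    print("Coefficients per variable:", dict(zip(X.columns, model.coef_)))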
NeuralNetworkQuadraticLinePredictor
This program uses a standard multilayer perceptron neural network which, during the first part of the program, is trained to recognize the pattern of a line drawn using a quadratic equation. The second part of the program automatically predicts points of the line based on inputs, making use of the training given to it earlier. The program is designed with object-oriented programming and is very flexible in being able to handle multiple inputs, outputs, and hidden layers.
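The same experiment can be sketched with a small multilayer perceptron in scikit-learn; the quadratic coefficients and network size below are arbitrary, and this is not the program's own object-oriented implementation:

    # Fit an MLP to points from y = 2x^2 + 3x + 1, then predict unseen points on the curve.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.linspace(-5, 5, 200).reshape(-1, 1)
    y = 2 * x[:, 0] ** 2 + 3 * x[:, 0] + 1

    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    mlp.fit(x, y)
    print(mlp.predict(np.array([[1.5], [4.2]])))   # predicted points on the learned curve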
KerasGym
This package was written to simplify the task of keeping track of keras deep learning models while working on a real-world problem. I found that being able to re-visit previous experiments, especially for comparing training curves and using saved models for prediction, was rather useful. This gym is under construction.
Quick start
Requires keras.
Splitting-Datasets
A C++ tool to split a dataset into training and test sets, and to split the training dataset for K-fold cross-validation. It also records the image file names in each subset and saves them in a separate text file. There are two modes:
basic_split and cross_validation. Example command input (7 classes, 5-fold cross-validation); a Python sketch of the splitting logic follows the example:
./dataset 0123456 training(generated in basic_split) test(generated in basic_split) after cross_validation ./dataset/traning cr1cr2cr3cr4cr5
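As mentioned above, the splitting logic itself is simple; here is a hedged Python sketch of the same idea (hold out a test set, split the rest into K folds, and write the file names of each subset to its own text file), with the folder layout and the fold-file names as assumptions echoing the example command:

    # Sketch of the splitting idea: hold out a test set, split the rest into K folds,
    # and write the image file names belonging to each subset into a separate text file.
    import random
    from pathlib import Path

    files = sorted(str(p) for p in Path("dataset").glob("*/*.jpg"))   # placeholder layout
    random.shuffle(files)

    test_size = len(files) // 5
    test, train = files[:test_size], files[test_size:]
    Path("test.txt").write_text("\n".join(test))

    k = 5
    for i in range(k):
        fold = train[i::k]                               # simple interleaved fold assignment
        Path(f"cr{i + 1}.txt").write_text("\n".join(fold))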
Smart Stylus
An offline handwriting recognition pen-like hardware and TensorFlow-based model implementation that will type what you write with it. Right now it supports English letters and numbers. That is 62 symbols!
Screenshot
Smart Stylus from scratch
Trained Model in Action
Basic Hardware Concept
The circuitry used by a mouse for recognizing and tracking movement has been embedded in the structure of a pen. Thus, when you write with the pen, the CNN converts the strokes to letters and produces a typed result.
Model
Structure
Can I edit the model?
Yes, the model structure is stored separately in models/cnn.py. Train.py expects a particular function from a model script, and that function should return x, y, y_true, optimizer.
Training.CSharp Workshop
This workshop is intended to be used as a tool by an instructor working with those interested in learning to program in C#. As such, there are some gaps where knowledge is assumed or a mentor is expected to assist. For those going solo, Google is your friend. Start by opening the Word document under the "docs" folder and follow the instructions for installing Visual Studio 2017 Community Edition, then start the lessons. The rest of the files here, excluding the .md files, are the C# Workshop source files. By reviewing the commits you'll see that every lesson is a changeset you can compare with your code if you run into any problems.
You'll be introduced to Visual Studio, an integrated development environment (IDE), which is where you type and compile your code. You could use Notepad and other tools, but I prefer Visual Studio. This workshop heavily promotes unit tests. Introductory discussions on design patterns, including repositories, are brought up, though nothing in-depth. Once you reach the object-oriented programming (OOP) descriptions, don't worry if you don't get it right away. Continue with the workshop and slowly you'll come to put the pieces together. This section is by far the hardest concept for new developers to grasp, so don't despair. There are plenty of wiki articles on the web that can further assist you. The key point is to keep doing the workshop, as doing it helps you get to the AHA moment where everything comes together.
If for any reason the installation instructions or extensions don't work as expected, please realize that these tools change very frequently. A quick search on the web will usually help you find what you need. Always try the StackOverflow articles first as they are usually more relevant and on point.
Caffe Monitoring
A simple tool for monitoring caffe training process. Clone this repository into your public_html folder to be able to monitor your network optimisation from a web browser. Loss and accuracy charts are plotted automatically and updated at a chosen time interval.
Plotting loss
Redirect caffe output to a file: caffe train -solver=solver.prototxt -weights=VGG_FACE.caffemodel -gpu=0 2>&1 | tee log.txt
Make sure log.txt has read permissions.
In the caffe-monitoring directory, create a symbolic link to your log.txt file: ln -s /path/to/log.txt log.txt
When accessing your public_html page from a web browser, if log.txt is listed under caffe-monitoring directory, then you are good to go.
Open caffe.html and type log.txt in the Filename input.
Choose polling interval for chart updates (defaults to 60 seconds).
Press start button.
Plotting accuracies
If you would like to also plot test accuracies, you may want to write your own python layer for that. As the caffe-monitoring tool uses regular expressions to fetch data from caffe logs, some rules should be followed when printing your results.
To plot individual accuracies for each class of your problem: print "Test result: class = {0}, accuracy = {1}".format(class_id, '%.3f' % accuracy)
To plot the mean accuracy: print "Test result: mean, accuracy = {0}".format('%.3f' % numpy.mean(accuracies))
To print class labels instead of numbers: print "Label for class {0} = {1}".format(class_id, class_label)
Do not forget to force the buffer to stdout after these print statements: sys.stdout.flush() (a consolidated sketch follows at the end of this section)
In the Classes input, type the classes you would like to plot values for. The list must be comma-separated, e.g. 0,1,2.
Press stop button.
Press start button.
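Putting the printing rules above together, a Python test hook might emit its results like this; the class labels and accuracy values are dummies, and only the printed formats matter to the caffe-monitoring regular expressions (written here with Python 3 print calls):

    # Placeholder results; only the printed format matters for the caffe-monitoring regexes.
    import sys
    import numpy

    accuracies = {0: 0.912, 1: 0.874, 2: 0.903}        # class id -> accuracy (dummy values)
    labels = {0: "cat", 1: "dog", 2: "bird"}

    for class_id, accuracy in accuracies.items():
        print("Test result: class = {0}, accuracy = {1}".format(class_id, '%.3f' % accuracy))
        print("Label for class {0} = {1}".format(class_id, labels[class_id]))
    print("Test result: mean, accuracy = {0}".format('%.3f' % numpy.mean(list(accuracies.values()))))
    sys.stdout.flush()                                  # force the buffer to stdout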
FaceTrainAndDetect
Training and detecting facial models based on OpenCV
Requirements
WebCamera
Windows 7 or later
OpenCV 3.0 (with opencv_contrib) or later
Microsoft Visual Studio 2015. In my test, the folder ORLface contains facial samples of 40 people (each person has 10 picture samples). You can train your own samples by placing them in the specified folder. You can also capture facial samples with your webcam; the program will save the facial images in the specified folder automatically.
Progress
2017-07-01 The code can train the model successfully. Testing confirms that the trained models are usable.
2017-07-02 Improved the logic for loading facial samples. The directory structure is SampleDIR --> label(int) --> xxx.bmp (see the sketch after this list).
2017-08-04 Added an automatic facial sample capture program. Testing shows that using the same webcam for sample collection and facial testing gives better results.
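The training step can be sketched in Python with the opencv_contrib face module; the project itself is a C++/Visual Studio build, so this is only a rough equivalent that assumes the SampleDIR --> label(int) --> xxx.bmp layout described above:

    # Rough Python equivalent: load label-per-folder samples and train an LBPH recognizer.
    import cv2
    import numpy as np
    from pathlib import Path

    images, labels = [], []
    for label_dir in Path("SampleDIR").iterdir():       # each sub-folder name is an integer label
        for bmp in label_dir.glob("*.bmp"):
            images.append(cv2.imread(str(bmp), cv2.IMREAD_GRAYSCALE))
            labels.append(int(label_dir.name))

    recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv_contrib
    recognizer.train(images, np.array(labels))
    recognizer.write("face_model.yml")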
Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages. This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of JDK 11+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.
View Kafka brokers: topic and partition assignments, and controller status
View topics: partition count, replication status, and custom configuration
Browse messages with JSON, plain text and Avro encoding
View consumer groups: per-partition parked offsets, combined and per-partition lag
Create new topics
View ACLs
Support for Azure Event Hubs
Requirements
Java 11 or newer
Kafka (version 0.11.0 or newer) or Azure Event Hubs
Optional, additional integration:
Schema Registry
APT Simulator is a Windows Batch script that uses a set of tools and output files to make a system look as if it were compromised. In contrast to other adversary simulation tools, APT Simulator is designed to make the application as simple as possible: you don't need to run a web server, database, or any agents on a set of virtual machines. Just download the prepared archive, extract it, and run the contained Batch file as Administrator. Running APT Simulator takes less than a minute of your time.
Bootkube is a tool for launching self-hosted Kubernetes clusters. When launched, bootkube will deploy a temporary Kubernetes control-plane (api-server, scheduler, controller-manager), which operates long enough to bootstrap a replacement self-hosted control-plane. Additionally, bootkube can be used to generate all of the necessary assets for use in bootstrapping a new cluster. These assets can then be modified to support any additional configuration options.
CloudFail is a tactical reconnaissance tool which aims to gather enough information about a target protected by Cloudflare in the hopes of discovering the location of the server. Using Tor to mask all requests, the tool currently has 3 different attack phases.
Please feel free to contribute to this project. If you have an idea or improvement, issue a pull request!
In the field of Tools in Programming II, learning from live instructor-led, hands-on training courses makes a big difference compared with watching video learning materials. Participants must maintain focus and interact with the trainer for questions and concerns. In Qwikcourse, trainers and participants use DaDesktop, a cloud desktop environment designed for instructors and students who wish to carry out interactive, hands-on training from distant physical locations.
For now, there are tremendous work opportunities in various IT fields. Most of the courses in Tools in Programming II are a great source of IT learning, with hands-on training and experience that could be a great contribution to your portfolio.