Blog

  • Super_Hero_Hunter

    Super_Hero_Hunter

    Super Hero Hunter is a project built on the Marvel API.

    A web application that allows users to search for their favorite superheroes and add them to their list of favorites. This project uses the Marvel API to fetch superhero data.

    Introduction

    Super Hero Hunter is a web application developed using vanilla JavaScript. It leverages the Marvel API to fetch information about superheroes. The home page displays superhero cards, and users can search for superheroes, view detailed information about them, and add them to their list of favorite superheroes.

    Features

    • Home page shows Superheroes cards.
    • Search for superheroes by name.
    • View detailed information about each superhero, including their name, photo, biography, comics, events, series, and stories.
    • Add superheroes to your list of favorite superheroes.
    • Remove superheroes from your list of favorite superheroes.

    Demo

    Watch a brief demo of the Super Hero Hunter application on YouTube.

    Installation

    To run the project locally, follow these steps:

    1. Clone the GitHub repository:

      git clone https://github.com/ParmodKumar28/Super_Hero_Hunter.git
    2. Open the project directory:

      cd Super_Hero_Hunter
    3. Open the index.html file in your web browser to launch the application.

    Usage

    • Home page shows the Superheroes.
    • Enter a superhero name in the search bar to search for superheroes.
    • Click the “More Info” button on a superhero card to view detailed information about that superhero.
    • Click the heart icon to add a superhero to your list of favorite superheroes.
    • Visit the “Favourites” page to see your list of favorite superheroes.
    • Click the “Remove” button to remove a superhero from your list of favorite superheroes.

    Persistence

    The list of favorite superheroes is stored locally using localStorage. This means your favorite superheroes will persist even after closing the browser.

    Contributing

    Contributions are welcome! If you have any improvements or suggestions, please create a pull request.

    Visit original content creator repository

  • discord-bot

    Discord Bot

    A Discord bot running designhub’s Discord server, based on GuideBot from AnIdiotsGuide and written in discord.js.
    Feel free to contribute if any feature is missing.


    Requirements

    You need your bot’s token. This is obtained by creating an application in
    the Developer section of discordapp.com. Check the first section of this page
    for more info.

    Installing Canvas Dependencies

    • OS X: sudo brew install pkg-config cairo pango libpng jpeg giflib
    • Ubuntu: sudo apt-get install libcairo2-dev libjpeg8-dev libpango1.0-dev libgif-dev build-essential g++
    • Fedora: sudo yum install cairo cairo-devel cairomm-devel libjpeg-turbo-devel pango pango-devel pangomm pangomm-devel giflib-devel
    • Solaris: pkgin install cairo pango pkg-config xproto renderproto kbproto xextproto
    • Windows: Instructions on their wiki

    Downloading

    In a command prompt in your project’s folder (wherever that may be) run the following:

    git clone https://github.com/dsgnhb/discord-bot.git

    Once finished:

    • In the folder from where you ran the git command, run cd discord-bot and then npm install. This will install all required packages and then run the installer.

    • You will be prompted to supply a number of access tokens and keys for various platforms; please follow the on-screen instructions to complete the installation.

    NOTE: A config file will be created for you.

    Starting the bot

    To start the bot, in the command prompt, run the following command:
    node app.js

    Inviting to a guild

    To add the bot to your guild, you have to get an OAuth link for it.

    You can use this site to help you generate a full OAuth Link, which includes a calculator for the permissions:
    https://finitereality.github.io/permissions-calculator/?v=0

    Contributing

    This project uses gitmoji for all commit messages:

    Gitmoji is an initiative to standardize and explain the use of emojis on GitHub commit messages. Using emojis on commit messages provides an easy way of identifying the purpose or intention of a commit.


    Authors: flo (https://flooo.me), lukas (https://lukaas.de), fin (https://angu.li), alex (https://github.com/CreepPlaysDE)

    Visit original content creator repository

  • espway

    ESPway

    A segway-like balancing two-wheel robot built on ESP8266. It is controlled over WiFi via a HTML/JS GUI and a WebSocket. This is a work in progress. The firmware is meant to run on a WEMOS D1 mini or a similar board.

    NOTE for existing users: the Arduino version of this code is deprecated and will not receive the newest improvements from the main branch. However, the Arduino branch will remain in this repo. If you’d like to have something improved or fixed there, please open an issue or preferably a pull request.

    Getting started

    The firmware is built on the esp-open-rtos project which is included as a submodule in this repo.

    For building the firmware, a Linux host is recommended. For other platforms, there is a Docker image one can use. The docker-run script in this repository contains the command that’s required to download the image and run commands in the container. It should work fine on macOS hosts. For Windows hosts, I still recommend installing Linux in a virtual machine and installing the tools there.

    Installing the tools and building

    NOTE: if you intend to use Docker, you only have to install esptool, git and make on your host. Please see below for further Docker instructions.

    Using the Docker image

    First, install Docker.

    Building the firmware using the supplied Docker image is easy. Instead of running make parallel, just run ./docker-run make parallel in the root folder of the repo. The command supplied to the script will be run in the Docker container, and the image will be automatically downloaded.

    Flashing the image differs, though, and for this you’ll need make on your host. Instead of

    make flash ESPPORT=/dev/ttyUSBx
    

    (/dev/ttyUSBx being the ESP’s serial port) run

    make flash-only ESPPORT=/dev/ttyUSBx
    

    in your host shell. The separate flash-only target is needed because the flash target would try to build the firmware. In the future, it is intended to provide a separate Python script for flashing, lifting the need for make on host.

    TODO: docker-compose configuration file for even easier usage

    Installing the tools manually (not recommended, not guaranteed to work)

    WARNING: Currently, the compiler installed by this method seems to have issues compiling the code, so this method is not recommended. Please see above for the Docker-based toolchain instead.

    Install these packages:

    • git (from your package manager)
    • xtensa-lx106-elf toolchain. You can use esp-open-sdk to build it, see the instructions in the esp-open-rtos repo.
    • esptool (pip install -U esptool). Please do not use the outdated version pulled by esp-open-sdk.
    • tcc, nodejs and npm are currently required due to the frontend code, although I’m investigating how to relax this dependency.

    Clone this repo (recursive cloning to get also esp-open-rtos and its submodules):

    git clone --recursive https://github.com/flannelhead/espway.git
    

    Enter the directory and build the firmware:

    make parallel
    

    The parallel target does a clean build with 5 parallel worker threads to make it faster.

    Plug your ESP8266 module in via USB and flash the firmware:

    make flash
    

    The default port is /dev/ttyUSB0. If you need to change this, use

    make flash ESPPORT=/dev/ttyUSBx
    

    Supported browsers

    Please use the latest Firefox or Chrome if possible. The HTML/JS UI uses some recent JavaScript features which might not be supported by older browsers. However, if you encounter any issues on e.g. Safari, don’t hesitate to file them in the issue tracker.

    Schematic & BOM

    There is a PCB design and schematic in the schematic folder which has been tested and is ready to use. There, MPU6050 has been replaced by LSM6DS3 and L293D by DRV8835 for better performance.

    Meanwhile, you can still build an ESPway using breakout boards available from the usual sources. A rough bill of materials (not including PCB, connectors, wire and other materials) is listed below:

    • WEMOS D1 Mini board
    • GY-521 (MPU6050 breakout board)
    • L293D motor driver IC
    • 2x 6V 300rpm metal gear motor (search for “12ga 300rpm” or “n20 300rpm”), these should be $3-5 per piece
    • 2x WS2812B neopixels for eyes and showing current state
    • AMS1117 5V regulator
    • 5x 100n ceramic capacitors
    • 2x 1000u 10V electrolytic capacitor
    • 470 ohm resistor
    • 10 kohm resistor
    • 680 kohm resistor

    To use the old hardware config and schematic, you’ll have to edit src/espway_config.h before compilation. See that file for notes.

    See the schematic-old folder for the schematic. It is drawn with KiCad and there’s a rendered PDF in the repo.

    The new schematic in schematic folder uses components
    from kicad-ESP8266 by J. Dunmire,
    licensed under CC-BY-SA 4.0. The schematic is also licensed under CC-BY-SA 4.0.

    Developing the frontend

    The HTML/JS frontend uses Webpack as the build system. You will need NodeJS and NPM (the package manager) to build the frontend pages. It does jobs like bundling the JavaScript modules together, minifying and transpiling the ES2015 code for older browsers, compiling Riot tags, minifying/autoprefixing CSS etc.

    After you’ve cloned this repo, run npm install in the root folder to install the package dependencies. There are two commands specified in package.json which run Webpack:

    • npm run serve starts a web server which serves the static HTML files in the frontend/output directory. It also watches for changes in the source files in frontend/src and automatically rebuilds and reloads them. Use this while hacking on the frontend. TODO: Currently this doesn’t see changes to HTML files, though.
    • npm run build produces a production build of the frontend JS/CSS bundles. Use this before uploading your work to the ESP8266.

    License and acknowledgements

    The project is licensed under LGPLv3.

    The project uses esp-open-rtos which is licensed under the BSD 3-clause license. It also has some components which have different licenses. Read more about them in the esp-open-rtos README.

    Visit original content creator repository

  • Myrient-Downloader-GUI

    Myrient-Downloader-GUI

    A GUI that manages downloading and processing video game ROMs/ISOs from Myrient Video Game Preservationists. Written in Python!

    Features

    • Multi-Platform Support: Download ROMs/ISOs from multiple gaming platforms
      • Available platforms are easily extensible via YAML configuration
    • PS3 ISO Decryption & Extraction: Option to decrypt and extract downloaded PS3 software for use on consoles and/or emulators like RPCS3
    • User-Friendly Setup: Required binaries are retrieved & set automatically (on Windows)
    • Cross Platform: Should work across all platforms!

    Quick Start

    Pre-built Releases (Recommended)

    1. Download the latest release for your platform
    2. Extract and run the executable
    3. That’s it! File lists/necessary files will be retrieved automatically and stored in config/.

    Required files for PS3 processing – Windows

    • Simply launch Myrient-Downloader-GUI and you should be prompted to automatically download the required tools from their respective repositories!
      • Alternatively, you can download the binaries PS3Dec and extractps3iso from below and manually specify them via Settings

    Required files for PS3 processing – Linux

    # Arch Linux (AUR)
    yay -S ps3dec-git ps3iso-utils-git
    
    # Or install manually from source repositories

    Configuration

    Platform Configuration

    The application uses a dynamic YAML-based configuration system located in config/myrient_urls.yaml, allowing for easy addition of new platforms without changing code.
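
    As a rough illustration only (the actual keys in myrient_urls.yaml may differ), such a platform list can be read with a standard YAML parser, so adding a platform boils down to appending an entry to the file:

      import yaml

      # Hypothetical structure; consult config/myrient_urls.yaml for the real schema.
      with open("config/myrient_urls.yaml", encoding="utf-8") as f:
          platforms = yaml.safe_load(f)

      # Assume each entry maps a platform name to its Myrient listing URL.
      for name, url in platforms.items():
          print(name, "->", url)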

    Settings Menu

    The destination folders of in-progress and completed downloads for each platform can be defined via the Settings menu. Settings are saved to config/myrientDownloaderGUI.ini.

    Running from Source

    Prerequisites

    # Install Python dependencies
    pip install -r requirements.txt
    
    # Or on Arch Linux:
    sudo pacman -S python-aiohttp python-beautifulsoup4 python-pyqt5 python-requests python-yaml python-pycdlib

    Installation

    # Clone the repository
    git clone https://github.com/hadobedo/Myrient-Downloader-GUI.git
    cd Myrient-Downloader-GUI
    
    # Install dependencies
    pip install -r requirements.txt
    
    # Run the application
    python3 ./myrientDownloaderGUI.py

    Credits/Binaries used:

    TODO

    • Better logging
      • Also add toggle to enable/disable ps3 decryption logs on Windows
    • Terminal interface
    • Consolidate myrient_urls.yaml and myrientDownloaderGUI.ini into a single yaml (maybe)
    • Improve downloading of filelist jsons and myrient_urls.yaml, goal is to make user ‘more aware’ of what’s happening
    • Add dropdown box/’hardcode’ some myrient urls for easier system configuration (maybe)

    Screenshots:

    (screenshots available in the original repository)

    Visit original content creator repository
  • Vigu

    Vigu

    Authors: Jens Riisom Schultz (backend), Johannes Skov Frandsen (frontend)

    Since 2012-03-20

    Vigu is a PHP error aggregation system, which collects all possible PHP errors and aggregates them in a Redis database. It includes a frontend to browse the data.

    This project is based on Zaphod and uses several other projects.

    Requirements

    • git is required for installation.
    • You need apache mod_rewrite.
    • You need the phpredis PHP extension.
    • You need a Redis server, dedicated to this application.

    Optionally, you can use a Gearman-based variant of the daemon, which adds further dependencies.

    Documentation

    • Point your browser to the root of the site, to start browsing errors.

    Installing

    • Clone vigu from git, i.e. git clone http://github.com/localgod/Vigu.git Vigu
    • Run php composer.phar install from command line.
    • Copy vigu.ini.dist to vigu.ini and edit it.
    • Make a vhost, to point at the root of vigu or the web/ folder, or however you choose to serve the site.
    • Set the daemon up, using php handlers/daemon.php -I. The daemon should be running at all times, but it may ONLY run on the Vigu server.
    • Copy vigu.ini to handlers/vigu.ini.
    • Include handlers/shutdown.php before anything else, preferably through php.ini’s auto_prepend_file directive. It has no dependencies besides its configuration file, which must exist next to it, so you can safely deploy it alone to all your servers.

    License

    Copyright 2012 Jens Riisom Schultz, Johannes Skov Frandsen

    Licensed under the Apache License, Version 2.0 (the “License”);
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0
    

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an “AS IS” BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.

    Visit original content creator repository

  • Handwriting-Prediction-Model

    Handwriting-Prediction-Model

    This project focuses on the implementation and development of a handwritten digit recognition algorithm (back-propagation) using Artificial Neural Networks. It is a classification problem solved using the aforementioned algorithm. A total of 400 input units are used, corresponding to each pixel of the image, and 10 output units, corresponding to each digit from 0-9. The number of nodes in the hidden layer has been set to 25. The implementation is done in MATLAB, using the Octave GUI 4.2.1. The ANN model was trained with 5000 images of various digits. The algorithm predicted English numerical digits with a maximum accuracy of 99.98%.

    Approach

    The proposed neural network architecture consists of 3 layers: the input layer, the hidden layer, and the output layer. The input and the hidden layer are connected by the weights theta1, and the hidden and the output layer are connected by the weights theta2. The weighted sums at the hidden and output layers can take values over an unbounded range, so in order to limit them each sum is passed through an activation function. In this scenario we use the sigmoid function as the activation function; its value always lies between 0 and 1.
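
    For reference, the sigmoid (logistic) activation referred to here is \( \sigma(z) = \frac{1}{1 + e^{-z}} \), which maps any real-valued weighted sum into the interval (0, 1).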

    A. Input Layer

    The input from the outside world/user is given to the input layer. Input is given in the form of a matrix X, where the number of rows equals the number of training examples and the 400 pixels extracted from each image are arranged as one row of X. Hence the dimensions of the matrix are X (number of training examples x 400).

    B. Hidden Layer

    There is no exact formula to calculate the number of units/nodes in the hidden layer. To minimize computational cost, the number of hidden nodes has been set to 25 here.

    C. Output Layer

    The output layer in this algorithm consists of 10 units, with each unit representing one of the digits 0-9.

    Randomly Initialize the Weights

    The weights connecting the input and the hidden layer, as well as the hidden and the output layer, are randomly initialized. The range of the weights is as given in the table.

    Forward Propagation

    Step 1: The inputs given to the input layer are multiplied by the weights connecting the input and the hidden layer and then passed through the sigmoid function, i.e. Output_one = SIGMOID(X * Theta_layer_one).

    Step 2: Output_one is then multiplied by the weights connecting the hidden and the output layer and passed through the sigmoid function, i.e. Output_two = SIGMOID(Output_one * Theta_layer_two).

    In this way we obtain the final output of the network.
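
    The project itself is implemented in Octave/MATLAB; as a rough illustration only, an equivalent NumPy sketch of the forward pass (bias units omitted for brevity, and assuming theta1 is 400x25 and theta2 is 25x10) might look like this:

      import numpy as np

      def sigmoid(z):
          # Logistic activation: squashes values into (0, 1)
          return 1.0 / (1.0 + np.exp(-z))

      def forward(X, theta1, theta2):
          # X: (m, 400) matrix, one flattened 400-pixel image per row
          # theta1: (400, 25) weights between input and hidden layer
          # theta2: (25, 10) weights between hidden and output layer
          output_one = sigmoid(X @ theta1)           # (m, 25) hidden activations
          output_two = sigmoid(output_one @ theta2)  # (m, 10) per-digit outputs
          return output_two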

    Cost Function

    The initial value of the cost function is calculated using the randomly initialized weights connecting the input and the hidden layer and the weights connecting the hidden and the output layer. Regularization of the error is taken into account while calculating the cost function, and adjustments are made accordingly.
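
    The exact cost is defined in the repository's Octave code; for a network like this one, a typical regularized cross-entropy cost has the form

      J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{10} \left[ -y_k^{(i)} \log h_\Theta(x^{(i)})_k - (1 - y_k^{(i)}) \log\left(1 - h_\Theta(x^{(i)})_k\right) \right] + \frac{\lambda}{2m} \left( \sum \Theta_1^2 + \sum \Theta_2^2 \right)

    where m is the number of training examples, y^(i) is the one-hot label for example i, and the regularization term sums the squared weights (conventionally excluding bias terms).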

    Back Propagation

    Back-propagation gives us a way to determine the error in the output of a previous layer given the error in the output of the current layer. The process starts at the last layer and calculates the change in the weights for the last layer. Then we can calculate the error in the output of the previous layer. Using the error in each layer, partial derivatives can be calculated for the weights connecting the input and the hidden layer as well as the weights connecting the hidden and the output layer.
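
    Continuing the NumPy sketch above (again an illustration only, not the repository's Octave code; bias units omitted and regularization applied to all weights), the gradients for both weight matrices can be computed as:

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def backprop(X, Y, theta1, theta2, lam):
          # X: (m, 400) inputs, Y: (m, 10) one-hot labels, lam: regularization strength
          m = X.shape[0]
          a2 = sigmoid(X @ theta1)                 # hidden activations
          a3 = sigmoid(a2 @ theta2)                # network output
          d3 = a3 - Y                              # error at the output layer
          d2 = (d3 @ theta2.T) * a2 * (1.0 - a2)   # error propagated to the hidden layer
          # Partial derivatives of the regularized cost w.r.t. each weight matrix
          grad1 = (X.T @ d2) / m + (lam / m) * theta1
          grad2 = (a2.T @ d3) / m + (lam / m) * theta2
          return grad1, grad2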

    Screenshot

    predicting 1

    predicting 5

    Visit original content creator repository
  • black

    Black

    A software rasterizer for Rust

    Overview

    Black is a small software rasterizer for Rust. It allows one to create simple 3D visualizations that are run entirely on the CPU. It provides a fairly low level graphics API specifically geared towards triangle rasterization and allows for custom vertex and fragment shaders to be implemented with Rust traits.

    This project was primarily written as an exercise in Rust. It is offered to anyone who finds it of use.

    Building

    This project was built against rustc 1.39.0-nightly (6ef275e6c 2019-09-24). The crate being used to present the window is the mini-fb crate. If building on Windows, you will need Windows C++ build tools. Once installed, just run the following from the project root to start the example project.

    $ cargo run --release

    Example

    The following code renders a single RGB triangle. Note that the Varying type must implement Interpolate, which performs perspective-correct per-fragment interpolation across the triangle.

    Note the implementation of TargetBuffer, which is used to receive the fragment shader output. If the output were written to a window or other output device, this code would produce the rendered RGB triangle.

    Refer to the example project in this repository for an implementation of TargetBuffer. It is leveraging the most excellent mini_fb crate. This should work on Windows, Mac and Linux.

    #[macro_use] extern crate black;
    
    use black::{ TargetBuffer, DepthBuffer, Raster, Interpolate, VertexProgram, FragmentProgram };
    use black::{ Vec4, Vec3, Mat4 };
    
    // ----------------------------------------------
    // Uniform, Vertex and Varying types
    // ----------------------------------------------
    
    struct Uniform {
        projection: Mat4,
        matrix:     Mat4,
        view:       Mat4
    }
    struct Vertex {
        position: Vec4,
        color:    Vec3
    }
    #[derive(Interpolate)]
    struct Varying {
        position: Vec4,
        color:    Vec3
    }
    
    // ----------------------------------------------
    // VertexProgram
    // ----------------------------------------------
    
    struct VertexShader; 
    impl VertexProgram for VertexShader {
        type Uniform = Uniform;
        type Varying = Varying;
        type Vertex  = Vertex;
    
        fn main(&self, uniform: &Uniform, vertex: &Vertex, varying: &mut Varying) -> Vec4 {
            // assign varying to be interpolated across this primitive.
            varying.position = vertex.position;
            varying.color    = vertex.color;

            // transform the vertex (analogous to gl_Position transform)
            vertex.position * (uniform.matrix * (uniform.view * uniform.projection))
        }
    }
    
    // -----------------------------------------
    // FragmentProgram
    // -----------------------------------------
    
    struct FragmentShader; 
    impl FragmentProgram for FragmentShader {
        type Uniform = Uniform;
        type Varying = Varying;
    
        fn main(&self, uniform: &Uniform, varying: &Varying) -> Vec4 {
            Vec4::new(
                varying.color.x, 
                varying.color.y, 
                varying.color.z, 
                1.0)
        }
    }
    
    // -----------------------------------------
    // TargetBuffer
    // -----------------------------------------
    
    struct ColorBuffer; 
    impl TargetBuffer for ColorBuffer {
        fn width (&self) -> usize { 256 }
        fn height(&self) -> usize { 256 }
        fn set(&mut self, x: usize, y: usize, color: Vec4) {
            // Invoked per fragment. Take vec4 output from fragment
            // shader and write to output device or other buffer.
        }
    }
    
    fn main() {
        
        // Color and Depth buffers.
        let mut color_buffer = ColorBuffer;
        let mut depth_buffer = DepthBuffer::new(256, 256);
    
        // Sets up the uniforms for this draw. Works
        // in a similar fashion to GLSL uniforms.
        let uniform = Uniform {
            projection: Mat4::perspective_fov(90.0, 1.0, 0.1, 1000.0),
            matrix:     Mat4::identity(),
            view:       Mat4::look_at(
                &Vec3::new(0.0, 0.0, 3.0),
                &Vec3::new(0.0, 0.0, 0.0),
                &Vec3::new(0.0, 1.0, 0.0),
            ),
        };
    
        // Rasterizes this triangle into the given
        // OutputBuffer. Depth values stored in the
        // given depth_buffer.
        Raster::triangle(
            &VertexShader,
            &FragmentShader,
            &mut depth_buffer,
            &mut color_buffer,
            &uniform,
            &Vertex { 
                position: Vec4::new(0.0, 1.0, 0.0, 1.0),
                color:    Vec3::new(1.0, 0.0, 0.0),
            },
            &Vertex {
                position: Vec4::new(-1.0, -1.0, 0.0, 1.0),
                color:    Vec3::new(0.0, 1.0, 0.0),
            },
            &Vertex { 
                position: Vec4::new(1.0, -1.0, 0.0, 1.0),
                color:    Vec3::new(0.0, 0.0, 1.0),
            } 
        );
    }
    Visit original content creator repository
  • Hacktoberfest-2023

    🎉 What is Hacktoberfest?

    Hacktoberfest 2023 is a month-long virtual festival celebrating open-source contributions. It’s the perfect entry point for newcomers to the world of open source!

    Throughout October 2023, all you need to do is contribute to any open-source project and have at least four pull requests merged. Yes, you can choose any project and any type of contribution. You don’t need to be an expert coder; it could be a bug fix, an improvement, or even a documentation update!

    Hacktoberfest welcomes participants from all corners of our global community. Whether you’re an experienced developer, a coding enthusiast just starting out, an event organizer, or a company of any size, you can help drive the growth of open source and make positive contributions to an ever-expanding community. People from diverse backgrounds and skill levels are encouraged to take part.

    Hacktoberfest is an inclusive event open to everyone in our global community! Pull requests can be submitted to any GitHub or GitLab-hosted repository or project. You can sign up anytime between October 1 and October 31, 2023.

    🤔 Why Should I Contribute?

    Hacktoberfest has a straightforward goal: to promote open source and reward those who contribute to it.

    However, it’s not just about the T-shirts and stickers; it’s about supporting and celebrating open source while giving back to the community. If you’ve never contributed to open source before, now is the perfect time to start. Hacktoberfest offers a wide range of contribution opportunities, including plenty suitable for beginners.

    👨‍💻 What Can I Contribute?

    Hacktoberfest is inclusive and open to everyone, regardless of your background or skill level. Whether you’re a seasoned developer, a student learning to code, an event host, or a company of any size, you can help foster the growth of open source and make meaningful contributions to a thriving community.

    Contributions don’t have to be limited to code; they can include documentation updates or fixing typos.

    You can contribute to any open source project hosted on GitHub.com between October 1 and October 31, 2023. Look for issues labeled with “hacktoberfest” or “good-first-issue” on GitHub; these are typically beginner-friendly and easy to tackle.

    💻 Quickstart

    • Assign yourself an issue and fork this repo. For more information, read the CONTRIBUTING guidelines.
    • Clone the repo locally using git clone https://github.com/Coderich-Community/Hacktoberfest-2023
    • After cloning, create a new branch using git checkout -b my-branch
    • Start making edits in the newly created git branch. First, add your name to the file
    • Add the modified/created files to staging using git add .
    • Commit the changes on the checked-out branch using git commit -m "commit message"
    • Push the changes using git push origin my-branch

    ✨ Contributing

    By contributing to this repository, you agree to adhere to its contribution rules. Here are a few general instructions for people willing to develop on the codebase.

    • Create issues to discuss your ideas with the maintainers

    Creating issues before starting to work on your pull request helps you stay on the right track. Discuss your proposal well with the current maintainers.

    • Keep the code clean

    Follow the code formatting standards of the repository by referring to existing source files.

    • Comments are the best

    Make it clear what hacks you’ve used to keep this website afloat. Your work needs to be understood before it can be appreciated.

    • Keep the Contributors section up-to-date

    To display your contributions to visitors and future contributors.


    Thanks To Our Valuable Contributors 👾 🤝

    Visit original content creator repository
  • hbox



    We have renamed the repository from XLearning to hbox.

    If you have a local clone of the repository, please update your remote URL:

    git remote set-url origin https://github.com/Qihoo360/hbox.git

    Hbox is a convenient and efficient scheduling platform for big data and artificial intelligence, supporting a variety of machine learning and deep learning frameworks. Hbox runs on Hadoop YARN and has integrated deep learning frameworks such as Tensornet, TensorFlow, MXNet, Caffe, Theano, PyTorch, Keras, XGBoost, Horovod, OpenMPI, and tensor2tensor. It supports GPU resource scheduling, running in Docker, and a RESTful API management interface. Hbox has satisfactory scalability and compatibility.

    Chinese documentation (中文文档)

    Architecture

    There are three essential components in Hbox:

    • Client: starts the application and gets its state.
    • ApplicationMaster (AM): the internal scheduler and lifecycle manager, responsible for input data distribution and container management.
    • Container: the actual executor of the application; it starts the Worker or PS (Parameter Server) process, monitors and reports the status of that process to the AM, and saves the output. In particular, it starts the TensorBoard service for TensorFlow applications.

    Functions

    1 Support Multiple Deep Learning Frameworks

    Besides the distributed mode of the TensorFlow and MXNet frameworks, Hbox supports the standalone mode of all deep learning frameworks such as Caffe, Theano, and PyTorch. Moreover, Hbox flexibly allows custom versions and multiple versions of the frameworks.

    2 Unified Data Management Based On HDFS

    Training data and model results are saved to HDFS (S3 is also supported). The input strategy for the input data --input can be specified by setting the --input-strategy parameter or the hbox.input.strategy configuration. Hbox supports three ways to read the HDFS input data:

    • Download: AM traverses all files under the specified HDFS path and distributes data to workers in files. Each worker downloads files from the remote path to the local machine.
    • Placeholder: The difference from Download mode is that AM sends the related HDFS file list to workers. The process in each worker reads the data from HDFS directly.
    • InputFormat: Integrating the InputFormat function of MapReduce, Hbox allows the user to specify any implementation of InputFormat for the input data. AM splits the input data and assigns fragments to the different workers. Each worker passes the assigned fragments through a pipeline to the execution process.

    Similar to the input strategy, Hbox allows specifying the output strategy for the output data --output by setting the --output-strategy parameter or the hbox.output.strategy configuration. There are two kinds of result output modes:

    • Upload: After the program finishes, each worker uploads the local directory of the output to the specified HDFS path directly. The “Save Model” button on the web interface allows the user to upload intermediate results during execution.
    • OutputFormat: Integrating the OutputFormat function of MapReduce, Hbox allows the user to specify any implementation of OutputFormat for saving the result to HDFS.

    For more detail, see data management.

    3 Visualization Display

    The application interface can be divided into four parts:

    • All Containers: displays the container list and corresponding information, including the container host, container role, current state of the container, start time, finish time, and current progress.
    • View TensorBoard: if the application type is TensorFlow and the TensorBoard service is set to start, a link is provided to enter TensorBoard for a real-time view.
    • Save Model: if the application has output, the user can upload the intermediate output to the specified HDFS path during execution through the “Save Model” button. After the upload finishes, the list of saved intermediate paths is displayed.
    • Worker Metrics: displays the resource usage metrics of each worker.
      As shown below:

    (screenshot of the application web interface)

    4 Compatible With The Code At Native Frameworks

    Except for the automatic construction of the ClusterSpec in the distributed mode of the TensorFlow framework, programs written for standalone-mode TensorFlow and other deep learning frameworks can be executed on Hbox directly.

    Compilation & Deployment Instructions

    1 Compilation Environment Requirements

    • jdk >= 1.8
    • Maven >= 3.6.3

    2 Compilation Method

    Run the following command in the root directory of the source code:

    ./mvnw package

    After compiling, a distribution package named hbox-1.1-dist.tar.gz will be generated under core/target in the root directory. Unpacking the distribution package, the following subdirectories will be generated under the root directory:

    • bin: scripts for managing application jobs
    • sbin: scripts for history service
    • lib: dependencies jars
    • libexec: common scripts and hbox-site.xml configuration examples
    • hbox-*.jar: HBox jars

    To set up the configuration, users need to set HBOX_CONF_DIR to a folder containing a valid hbox-site.xml, or link this folder to $HBOX_HOME/conf.

    3 Deployment Environment Requirements

    • CentOS 7.2
    • Java >= 1.8
    • Hadoop = 2.6 — 3.2 (GPU requires 3.1+)
    • [optional] Dependent environment for deep learning frameworks at the cluster nodes, such as TensorFlow, numpy, Caffe.

    4 Hbox Client Deployment Guide

    Under the “conf” directory of the unpacked distribution package “$HBOX_HOME”, configure the related files:

    • hbox-env.sh: set the environment variables, such as:

      • JAVA_HOME
      • HADOOP_CONF_DIR
    • hbox-site.xml: configure related properties. Note that the properties associated with the history service need to be consistent with what was configured when the history service started. For more details, please see the Configuration part.

    • log4j.properties: configure the log level

    5 Start Method of Hbox History Service [Optional]

    • run $HBOX_HOME/sbin/start-history-server.sh.

    Quick Start

    Use $HBOX_HOME/bin/hbox-submit to submit the application to the cluster from the Hbox client. Here is a submit example for a TensorFlow application.

    1 upload data to hdfs

    Upload the “data” directory under the root of the unpacked distribution package to HDFS:

    cd $HBOX_HOME  
    hadoop fs -put data /tmp/ 
    

    2 submit

    cd $HBOX_HOME/examples/tensorflow
    $HBOX_HOME/bin/hbox-submit \
       --app-type "tensorflow" \
       --app-name "tf-demo" \
       --input /tmp/data/tensorflow#data \
       --output /tmp/tensorflow_model#model \
       --files demo.py,dataDeal.py \
       --worker-memory 10G \
       --worker-num 2 \
       --worker-cores 3 \
       --ps-memory 1G \
       --ps-num 1 \
       --ps-cores 2 \
       --queue default \
       python demo.py --data_path=./data --save_path=./model --log_dir=./eventLog --training_epochs=10
    

    The meanings of the parameters are as follows:

    • app-name: application name, “tf-demo”
    • app-type: application type, “tensorflow”
    • input: input file; HDFS path “/tmp/data/tensorflow” related to local dir “./data”
    • output: output file; HDFS path “/tmp/tensorflow_model” related to local dir “./model”
    • files: application program and required local files, including demo.py and dataDeal.py
    • worker-memory: amount of memory to use for the worker process, 10 GB
    • worker-num: number of worker containers to use for the application, 2
    • worker-cores: number of cores to use for the worker process, 3
    • ps-memory: amount of memory to use for the ps process, 1 GB
    • ps-num: number of ps containers to use for the application, 1
    • ps-cores: number of cores to use for the ps process, 2
    • queue: the queue that the application is submitted to

    For more details, see the Submit Parameter part.

    FAQ

    Hbox FAQ

    Authors

    Hbox is designed, authored, reviewed and tested by the team on GitHub:

    @Yuance Li, @Wen OuYang, @Runying Jia, @YuHan Jia, @Lei Wang

    Contact us

    (QQ group QR code available in the original repository)

    Visit original content creator repository
  • customized-linkedin-to-jsonresume

    Customized LinkedIn Profile to JSON Resume Browser Tool

    🖼️ This is a slightly tweaked version of the LinkedIn to JSON Resume Chrome Extension. That project is outdated because it isn’t using the latest version of JSON Schema. Furthermore, I have customized that schema myself, so I have to base this Chrome extension off of my own schema.

    Build

    1. npm install
    2. Make a code change and then run npm run build-browserext, which will generate files in ./build-browserext.
    3. npm run package-browserext will package the build as a ZIP in the webstore-zips directory.
    4. If you want to do something else besides side-loading, read the original README.

    Usage

    For local use:

    1. npm run package-browserext will package the build as a ZIP in the webstore-zips directory.
    2. In Chrome, go to chrome://extensions then drag-n-drop the ZIP onto the browser. Note that developer mode must be turned on.
    3. Go to your LinkedIn profile, e.g. www.linkedin.com/in/anthonydellavecchia, and click the “LinkedIn Profile to JSON” button.
    4. After a second or two, JSON will be generated. Copy this, as it is a raw/pre-transformation version.
    5. Note that in the Chrome Extension, you can select either the custom version of the JSON schema that I created, or the last stable build from v0.0.16 (mine is based on v1.0.0).

    Design

    • browser-ext/popup.html holds the HTML for the Chrome Extension.
    • jsonresume.scheama.latest.ts is the latest schema from JSON Resume Schema (v1.0.0).
    • jsonresume.scheama.stable.ts is the stable but very outdated schema from JSON Resume Schema (v0.0.16).
    • src/main.js holds most of the JavaScript to get and transform data from LinkedIn.
    • src/templates.js holds the templates for the schema.

    Click to expand README.md of the source repository!

    An extremely easy-to-use browser extension for exporting your full LinkedIn Profile to a JSON Resume file or string.

    Demo GIF

    Usage / Installation Options:

    There are (or were) a few different options for how to use this:

    • Fast and simple: Chrome Extension – Get it here
      • Feel free to install, use, and then immediately uninstall if you just need a single export
      • No data is collected
    • [Deprecated] (at least for now): Bookmarklet
      • This was originally how this tool worked, but had to be retired as a valid method when LinkedIn added a stricter CSP that prevented it from working
      • Code to generate the bookmarklet is still in this repo if LI ever loosens the CSP

    Schema Versions

    This tool supports multiple version of the JSON Resume Schema specification for export, which you can easily swap between in the dropdown selector! ✨

    “Which schema version should I use?”

    If you are unsure, you should probably just stick with “stable”, which is the default. It should have the most widespread support across the largest number of platforms.

    Support for Multilingual Profiles

    LinkedIn has a unique feature that allows you to create different versions of your profile for different languages, rather than relying on limited translation of certain fields.

    For example, if you are bilingual in both English and German, you could create one version of your profile for each language, and then viewers would automatically see the correct one depending on where they live and their language settings.

    I’ve implemented support (starting with v1.0.0) for multilingual profile export through a dropdown selector:

    Export Language Selector

    The dropdown should automatically get populated with the languages that the profile you are currently viewing supports, in addition to your own preferred viewing language in the #1 spot. You should be able to switch between languages in the dropdown and click the export button to get a JSON Resume export with your selected language.

    Note: LinkedIn offers language choices through a Locale string, which is a combination of country (ISO-3166) and language (ISO-639). I do not make decisions as to what languages are supported.

    This feature is the part of this extension most likely to break in the future; LI has some serious quirks around multilingual profiles – see my notes for details.

    Export Options

    There are several main buttons in the browser extension, with different effects. You can hover over each button to see the alt text describing what they do, or read below:

    • LinkedIn Profile to JSON: Converts the profile to the JSON Resume format, and then displays it in a popup modal for easy copying and pasting
    • Download JSON Resume Export: Same as above, but prompts you to download the result as an actual .json file.
    • Download vCard File: Export and download the profile as a Virtual Contact File (.vcf) (aka vCard)
      • There are some caveats with this format; see below

    vCard Limitations and Caveats

    • Partial birthdate (aka BDAY) values (e.g. where the profile has a month and day, but has not opted to share their birth year), are only supported in v4 (RFC-6350) and above. This extension currently only supports v3, so in these situations the tool will simply omit the BDAY field from the export
      • See #32 for details
    • The LinkedIn display photo (included in vCard) served by LI is a temporary URL, with a fixed expiration date set by LinkedIn. From observations, this is often set months into the future, but could still be problematic for address book clients that don’t cache images. To work around this, I’m converting it to a base64 string; this should work with most vCard clients, but also increases the vCard file size considerably.

    Chrome Side-loading Instructions

    Instead of installing from the Chrome Webstore, you might want to “side-load” a ZIP build for either local development, or to try out a new release that has not yet made it through the Chrome review process. Here are the instructions for doing so:

    1. Find the ZIP you want to load
      • If you want to side-load the latest version, you can download a ZIP from the releases tab
      • If you want to side-load a local build, use npm run package-browserext to create a ZIP
    2. Go to Chrome’s extension setting page (chrome://extensions)
    3. Turn on developer mode (upper right toggle switch)
    4. Drag the downloaded zip to the browser to let it install
    5. Test it out, then uninstall

    You can also unpack the ZIP and load it as “unpacked”.

    Troubleshooting

    When in doubt, refresh the profile page before using this tool.

    Troubleshooting – Debug Log

    If I’m trying to assist you in solving an issue with this tool, I might have you share some debug info. Currently, the easiest way to do this is to use the Chrome developer’s console:

    1. Append ?li2jr_debug=true to the end of the URL of the profile you are on
    2. Open Chrome dev tools, and specifically, the console (instructions)
    3. Run the extension (try to export the profile), and then look for red messages that show up in the console (these are errors, as opposed to warnings or info logs).
      • You can filter to just error messages, in the filter dropdown above the console.

    Updates:

    Update History (Click to Show / Hide)
    Date Release Notes
    2/27/2021 2.1.2 Fix: Multiple issues around work history / experience; missing titles, ordering, etc. Overhauled approach to extracting work entries.
    12/19/2020 2.1.1 Fix: Ordering of work history with new API endpoint (#38)
    12/7/2020 2.1.0 Fix: Issue with multilingual profile, when exporting your own profile with a different locale than your profile’s default. (#37)
    11/12/2020 2.0.0 Support for multiple schema versions ✨ (#34)
    11/8/2020 1.5.1 Fix: Omit partial BDAY export in vCard (#32)
    10/22/2020 1.5.0 Fix: Incorrect birthday month in exported vCards (off by one)
    Fix: Better pattern for extracting profile ID from URL, fixes extracting from virtual sub-pages of profile (e.g. /detail/contact-info), or with query or hash strings at the end.
    7/7/2020 1.4.2 Fix: For work positions, if fetched via profilePositionGroups, LI ordering (the way it looks on your profile) was not being preserved.
    7/31/2020 1.4.1 Fix: In some cases, wrong profileUrnId was extracted from current profile, which led to work history API call being ran against a different profile (e.g. from “recommended section”, or something like that).
    7/21/2020 1.4.0 Fix: For vCard exports, Previous profile was getting grabbed after SPA navigation between profiles.
    7/6/2020 1.3.0 Fix: Incomplete work position entries for some users; LI was limiting the amount of pre-fetched data. Had to implement request paging to fix.
    Also refactored a lot of code, improved result caching, and other tweaks.
    6/18/2020 1.2.0 Fix / Improve VCard export feature.
    6/5/2020 1.1.0 New feature: vCard export, which you can import into Outlook / Google Contacts / etc.
    5/31/2020 1.0.0 Brought output up to par with “spec”, integrated schemas as TS, added support for multilingual profiles, overhauled JSDoc types.
    Definitely a breaking change, since the output has changed to mirror schema more closely (biggest change is website in several spots has become url)
    5/9/2020 0.0.9 Fixed “references”, added certificates (behind setting), and formatting tweaks
    4/4/2020 0.0.8 Added version string display to popup
    4/4/2020 0.0.7 Fixed and improved contact info collection (phone, Twitter, and email). Miscellaneous other tweaks.
    10/22/2019 0.0.6 Updated recommendation querySelector after LI changed DOM. Thanks again, @ lucbpz.
    10/19/2019 0.0.5 Updated LI date parser to produce date string compliant with JSONResume Schema (padded). Thanks @ lucbpz.
    9/12/2019 0.0.4 Updated Chrome webstore stuff to avoid LI IP usage (Google took down extension page due to complaint). Updated actual scraper code to grab full list of skills vs just highlighted.
    8/3/2019 NA Rewrote this tool as a browser extension instead of a bookmarklet to get around the CSP issue. Seems to work great!
    7/22/2019 NA ALERT: This bookmarklet is currently broken, thanks to LinkedIn adding a new restrictive CSP (Content Security Policy) header to the site. I’ve opened an issue to discuss this, and both short-term (requires using the console) and long-term (browser extension) solutions.
    6/21/2019 0.0.3 I saw the bookmarklet was broken depending on how you came to the profile page, so I refactored a bunch of code and found a much better way to pull the data. Should be much more reliable!

    What is JSON Resume?

    “JSON Resume” is an open-source standard / schema, currently gaining in adoption, that standardizes the content of a resume into a shared underlying structure that others can use in automated resume formatters, parsers, etc. Read more about it here, or on GitHub.

    What is this tool?

    I made this because I wanted a way to quickly generate a JSON Resume export from my LinkedIn profile, and got frustrated with how locked down the LinkedIn APIs are and how slow it is to request your data export (up to 72 hours). “Install” the tool to your browser, then click to run it while looking at a LinkedIn profile (preferably your own), and my code will grab the various pieces of information off the page and then show a popup with the full JSON resume export that you can copy and paste to wherever you would like.


    Development

    With the rewrite to a browser extension, I actually configured the build scripts to be able to still create a bookmarklet from the same codebase, in case the bookmarklet ever becomes a viable option again.

    Building the browser extension

    npm run build-browserext will transpile and copy all the right files to ./build-browserext, which you can then side-load into your browser. If you want to produce a single ZIP archive for the extension, npm run package-browserext will do that.

    Use build-browserext-debug for a source-map debug version. To get more console output, append li2jr_debug=true to the query string of the LI profile you are using the tool with.

    Building the bookmarklet version

    Currently, the build process looks like this:

    • src/main.js -> (webpack + babel) -> build/main.js -> mrcoles/bookmarklet -> build/bookmarklet_export.js -> build/install-page.html
      • The bookmark can then be dragged to your bookmarks from the final build/install-page.html

    All of the above should happen automatically when you do npm run build-bookmarklet.

    If this ever garners enough interest and needs to be updated, I will probably want to re-write it with TypeScript to make it more maintainable.

    LinkedIn Documentation

    For understanding some peculiarities of the LI API, see LinkedIn-Notes.md.

    Debugging

    Debugging the extension is a little cumbersome, because of the way Chrome sandboxes extension scripts and how code has to be injected. An alternative to setting breakpoints in the extension code itself is to copy the output of /build/main.js and run it via the console.

    li2jr = new LinkedinToResumeJson(true, true);
    li2jr.parseAndShowOutput();

    Even if you have the repo inside of a local static server, you can’t inject it via a script tag or fetch & eval, due to LI’s restrictive CSP.

    If you do want to find the actual injected code of the extension in Chrome dev tools, you should be able to find it under Sources -> Content Scripts -> top -> JSON Resume Exporter -> {main.js}

    Debugging Snippets

    Helpful snippets (subject to change; these rely heavily on internals):

    // Get main profileDB (after running extension)
    var profileRes = await liToJrInstance.getParsedProfile(true);
    var profileDb = await liToJrInstance.internals.buildDbFromLiSchema(profileRes.liResponse);

    DISCLAIMER:

    This tool is not affiliated with LinkedIn in any manner. Intended use is to export your own profile data, and you, as the user, are responsible for using it within the terms and services set out by LinkedIn. I am not responsible for any misuse, or repercussions of said misuse.

    Attribution:

    Icon for browser extension:

    Visit original content creator repository