Category: Blog

  • super-offers

    Super Offers

    Complete on-chain tasks and get a constant stream of income

    cover_image

    Description

    Repudiation (denial of an act) is one of the major issues in creating offers or agreements with anyone online. Such a deal requires trusting that the other party will actually provide the money or service once the deal is completed. SuperOffers solves this problem: the entire logic is handled on-chain without any DAO or third party involved, which makes it completely trust-free and transparent.

    The workflow involves two different types of users:

    1. Offerer
    2. Claimer

    The Offerer creates a super offer declaring the total bounty reward, etc. Here comes the interesting part: the offer is not evaluated by any off-chain logic. The offerer enters a contract address and contract ABI, and chooses a function signature along with the value expected when calling that function. Multiple functions from multiple contracts can be added to the same offer. When any claimer satisfies this logic, they can apply for a claim and the smart contract starts streaming them tokens until the offer timeline ends. This way, both task verification and money streaming happen completely on-chain, with no chance of denying an act or betrayal.

    I am looking forward to the Streaming Distributions (coming soon) feature in SuperFluid, which would be perfectly suited for this use case. It would also save me some of my complex smart contract logic 😅.

    Possible use cases for Super Offers can be:

    1. Hold 2 Bored Ape NFTs
    2. Hold 25 LINK Tokens.
    3. Make 10 proposals in CompoundDAO
    4. Hold Token #1200 in the CryptoPunks NFT collection.
      The possibilities are endless..
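
    To make the claim logic concrete, below is a rough off-chain sketch in Python with web3.py of the kind of condition an offer can encode, using the "Hold 25 LINK Tokens" example above. The RPC endpoint is a placeholder and this is not SuperOffers code; on-chain, the contract performs the equivalent low-level call and compares the returned value with the expected one.

    # Illustrative only -- placeholder RPC endpoint; not part of SuperOffers itself.
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

    ERC20_ABI = [{
        "constant": True,
        "inputs": [{"name": "owner", "type": "address"}],
        "name": "balanceOf",
        "outputs": [{"name": "", "type": "uint256"}],
        "type": "function",
    }]

    # LINK token contract on Ethereum mainnet
    link = w3.eth.contract(address="0x514910771AF9Ca656af840dff83E8264EcF986CA", abi=ERC20_ABI)

    def satisfies_offer(claimer: str) -> bool:
        """Offer condition: the claimer (checksummed address) holds at least 25 LINK (18 decimals)."""
        return link.functions.balanceOf(claimer).call() >= 25 * 10**18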

    Challenges

    I struggled with developing the logic for my Solidity smart contracts. I ran into a lot of issues when making low-level calls to smart contracts, and then when debugging and verifying the execution of the logic.
    I have developed dApps with The Graph before, but this time I ran into a lot of issues deploying my subgraph.
    All other challenges were manageable 🙂

    Visit original content creator repository

  • PLP-Order-Management-System

    Salesforce App

    This guide helps Salesforce developers who are new to Visual Studio Code go from zero to a deployed app using Salesforce Extensions for VS Code and Salesforce CLI.

    Part 1: Choosing a Development Model

    There are two types of developer processes or models supported in Salesforce Extensions for VS Code and Salesforce CLI. These models are explained below. Each model offers pros and cons and is fully supported.

    Package Development Model

    The package development model allows you to create self-contained applications or libraries that are deployed to your org as a single package. These packages are typically developed against source-tracked orgs called scratch orgs. This development model is geared toward a more modern type of software development process that uses org source tracking, source control, and continuous integration and deployment.

    If you are starting a new project, we recommend that you consider the package development model. To start developing with this model in Visual Studio Code, see Package Development Model with VS Code. For details about the model, see the Package Development Model Trailhead module.

    If you are developing against scratch orgs, use the command SFDX: Create Project (VS Code) or sfdx force:project:create (Salesforce CLI) to create your project. If you used another command, you might want to start over with that command.

    When working with source-tracked orgs, use the commands SFDX: Push Source to Org (VS Code) or sfdx force:source:push (Salesforce CLI) and SFDX: Pull Source from Org (VS Code) or sfdx force:source:pull (Salesforce CLI). Do not use the Retrieve and Deploy commands with scratch orgs.

    Org Development Model

    The org development model allows you to connect directly to a non-source-tracked org (sandbox, Developer Edition (DE) org, Trailhead Playground, or even a production org) to retrieve and deploy code directly. This model is similar to the type of development you have done in the past using tools such as Force.com IDE or MavensMate.

    To start developing with this model in Visual Studio Code, see Org Development Model with VS Code. For details about the model, see the Org Development Model Trailhead module.

    If you are developing against non-source-tracked orgs, use the command SFDX: Create Project with Manifest (VS Code) or sfdx force:project:create --manifest (Salesforce CLI) to create your project. If you used another command, you might want to start over with this command to create a Salesforce DX project.

    When working with non-source-tracked orgs, use the commands SFDX: Deploy Source to Org (VS Code) or sfdx force:source:deploy (Salesforce CLI) and SFDX: Retrieve Source from Org (VS Code) or sfdx force:source:retrieve (Salesforce CLI). The Push and Pull commands work only on orgs with source tracking (scratch orgs).

    The sfdx-project.json File

    The sfdx-project.json file contains useful configuration information for your project. See Salesforce DX Project Configuration in the Salesforce DX Developer Guide for details about this file.

    The most important parts of this file for getting started are the sfdcLoginUrl and packageDirectories properties.

    The sfdcLoginUrl specifies the default login URL to use when authorizing an org.

    The packageDirectories filepath tells VS Code and Salesforce CLI where the metadata files for your project are stored. You need at least one package directory set in your file. The default setting is shown below. If you set the value of the packageDirectories property called path to force-app, by default your metadata goes in the force-app directory. If you want to change that directory to something like src, simply change the path value and make sure the directory you’re pointing to exists.

    "packageDirectories" : [
        {
          "path": "force-app",
          "default": true
        }
    ]

    Part 2: Working with Source

    For details about developing against scratch orgs, see the Package Development Model module on Trailhead or Package Development Model with VS Code.

    For details about developing against orgs that don’t have source tracking, see the Org Development Model module on Trailhead or Org Development Model with VS Code.

    Part 3: Deploying to Production

    Don’t deploy your code to production directly from Visual Studio Code. The deploy and retrieve commands do not support transactional operations, which means that a deployment can fail in a partial state. Also, the deploy and retrieve commands don’t run the tests needed for production deployments. The push and pull commands are disabled for orgs that don’t have source tracking, including production orgs.

    Deploy your changes to production using packaging or by converting your source into metadata format and using the metadata deploy command.

    Visit original content creator repository

  • RLCodebase

    RLCodebase

    RLCodebase is a modularized codebase for deep reinforcement learning algorithms based on PyTorch. This repo aims to provide a user-friendly reinforcement learning codebase for beginners to get started and for researchers to try their ideas quickly and efficiently.

    For now, it has implemented DQN(PER), A2C, PPO, DDPG, TD3 and SAC algorithms, and has been tested on Atari, Procgen, Mujoco, PyBullet and DMControl Suite environments.

    Introduction

    The design of RLCodebase is shown below; a minimal usage sketch follows the component list.

    RLCodebase

    • Config: Config is a class that contains parameters for reinforcement learning algorithms such as discount factor, learning rate, etc. and general configurations such as random seed, saving path, etc.
    • Trainer: Trainer is a wrapped class that controls the workflow of reinforcement learning training. It manages the interactions between submodules (Agent, Env, memory).
    • Agent: Agent chooses actions to take given states. It also defines how to update the model given a batch of data.
    • Model: Model gathers all neural networks to train.
    • Env: Env is a vectorized gym environment.
    • Memory: Memory stores experiences utilized for RL training.
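
    Putting these modules together, a typical training loop looks roughly like the sketch below. The class and method names are illustrative assumptions based on the description above, not the actual API; see examples/example_ppo.py for a real entry point.

    # Illustrative sketch of the Trainer workflow -- names are assumptions, not the real API.
    def train(config, agent, env, memory):
        state = env.reset()
        for step in range(config.max_steps):
            action = agent.act(state)                     # Agent chooses actions given states
            next_state, reward, done, info = env.step(action)
            memory.store(state, action, reward, next_state, done)
            state = next_state
            if memory.ready(config.batch_size):
                batch = memory.sample(config.batch_size)  # Memory provides experience for updates
                agent.update(batch)                       # Agent defines how the Model is updated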

    Installation

    All required packages are included in setup.py and requirements.txt. MuJoCo is needed for mujoco_py and the DeepMind Control Suite. To support mujoco_py and dm_control, please refer to https://github.com/openai/mujoco-py and https://github.com/deepmind/dm_control. For mujoco_py 2.1.2.14 and dm_control (commit fe44496), you can download MuJoCo as shown below:

    cd ~  
    mkdir .mujoco  
    cd .mujoco  
    # for mujoco_py
    wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz
    tar -xf mujoco210-linux-x86_64.tar.gz  
    # for dm control
    wget https://github.com/deepmind/mujoco/releases/download/2.1.1/mujoco-2.1.1-linux-x86_64.tar.gz
    tar -xf mujoco-2.1.1-linux-x86_64.tar.gz
    

    To install RLCodebase, follow

    # create virtual env
    conda create -n rlcodebase python=3.8
    conda activate rlcodebase
    
    # install rlcodebase
    git clone git@github.com:KarlXing/RLCodebase.git RLCodebase
    cd RLCodebase
    pip install -e .
    pip install -r requirements.txt
    
    # try it
    python examples/example_ppo.py
    

    Supported Algorithms

    • DQN (PER)
    • A2C
    • PPO
    • DDPG
    • TD3
    • SAC

    Supported Environments (tested)

    • Atari
    • Mujoco
    • PyBullet
    • Procgen

    Results

    1. PPO & A2C In Atari Games

    2. DDPG & TD3 & SAC In PyBullet Environments

    3. DQN & DQN+PER In PongNoFrameskip-v4

    4. Procgen

    Citation

    Please use the bibtex below if you want to cite this repository in your publications:

    @misc{rlcodebase,
      author = {Jinwei Xing},
      title = {RLCodebase: PyTorch Codebase For Deep Reinforcement Learning Algorithms},
      year = {2020},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/KarlXing/RLCodebase}},
    }
    

    References for implementation and design

    RLCodebase is inspired by resources below.

    Visit original content creator repository
  • amtui

    AMTUI – Alertmanager Terminal User Interface

    Go version Release Go Report Card License Discord

    
     █████╗ ███╗   ███╗████████╗██╗   ██╗██╗
    ██╔══██╗████╗ ████║╚══██╔══╝██║   ██║██║
    ███████║██╔████╔██║   ██║   ██║   ██║██║
    ██╔══██║██║╚██╔╝██║   ██║   ██║   ██║██║
    ██║  ██║██║ ╚═╝ ██║   ██║   ╚██████╔╝██║
    ╚═╝  ╚═╝╚═╝     ╚═╝   ╚═╝    ╚═════╝ ╚═╝
                                 
    

    AMTUI is a terminal-based user interface (TUI) application that allows you to interact with Prometheus Alertmanager using your terminal. It provides a convenient way to monitor alerts, view silences, and check the status of Alertmanager instances right from your command line.

    AMTUI Demo

    Features

    • View active alerts with details such as severity, alert name, and description.
    • Browse and review existing silences in Alertmanager.
    • Filter alerts and silences using matchers.
    • Check the general status of your Alertmanager instance.

    Installation

    Using Homebrew

    You can install AMTUI using the Homebrew package manager:

    brew tap pehlicd/tap
    brew install amtui

    Using go install

    You can install AMTUI using the go install command:

    go install github.com/pehlicd/amtui@latest

    From Releases

    You can download the latest release of AMTUI from the GitHub releases page.

    From Source

    To use AMTUI, you’ll need to have Go installed on your system. Then, you can install AMTUI using the following steps:

    1. Clone the repository:
    git clone https://github.com/pehlicd/amtui.git
    2. Navigate to the project directory:
    cd amtui
    3. Build the application:
    go build
    4. Run the application:
    ./amtui

    Usage

    Once you’ve launched AMTUI, you can navigate through different sections using the following keyboard shortcuts:

    • Press 1 to view and interact with active alerts.
    • Press 2 to see existing silences.
    • Press 3 to check the general status of your Alertmanager instance.

    Keyboard Shortcuts

    • q: Quit the application.
    • l: Focus on the preview list.
    • h: Focus on the sidebar list.
    • j: Move focus to the preview.
    • k: Move focus to the preview list.
    • CTRL + F: Focus on the filter input.
    • ESC: Return focus to the sidebar list.

    Configuration

    AMTUI uses a configuration file to connect to your Alertmanager instance. By default, the application will look for a configuration file at ~/.amtui.yaml. If the configuration file doesn’t exist, AMTUI will guide you through creating it with the necessary connection details.

    You can also specify connection details using command-line flags:

    amtui --host 127.0.0.1 --port 9093 --scheme http

    AMTUI also supports basic authentication. You can specify the username and password using the --username and --password flags:

    amtui --host 127.0.0.1 --port 9093 --scheme http --username admin --password admin

    Dependencies

    AMTUI uses the following dependencies:

    • github.com/gdamore/tcell/v2: Terminal handling and screen painting.
    • github.com/prometheus/alertmanager/api/v2/client: Alertmanager API client.
    • github.com/rivo/tview: Terminal-based interactive viewer.
    • github.com/spf13/pflag: Flag parsing.
    • github.com/spf13/viper: Configuration management.

    Contributing

    If you’d like to contribute to AMTUI, feel free to submit pull requests or open issues on the GitHub repository. Your feedback and contributions are highly appreciated!

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Stargazers over time

    Stargazers over time


    Developed by Furkan Pehlivan | Project Repository

    Visit original content creator repository
  • bloggen-net

    Bloggen.Net .NET Core Codacy Badge

    A .NET Core cross-platform static blog generator based on Markdown, YAML front matter and Handlebars.Net

    A working example of a blog generated by this is my own: https://codernr.github.io/

    To read more about this project, see this blog post: https://codernr.github.io/posts/this-post-is-generated-by-the-subject-of-this-post

    Features

    • Uses YAML front matter for metadata and markdown
    • Generates posts and static pages
    • Handles tags
    • Supports pagination of posts
    • Uses Handlebars language for templates

    Usage

    1. Download the latest release
    2. Extract the tar file
    3. Go to the extracted directory
    4. Run dotnet ./Bloggen.Net.dll -s <source directory path> -o <output directory path>

    Source directory structure

    The source directory has to follow a well defined structure (see details in next sections):

    source/
    β”œβ”€β”€ assets/
    β”‚   β”œβ”€β”€ img/
    β”‚   β”‚   β”œβ”€β”€ some_image.jpg
    β”‚   β”‚   └── ...
    β”‚   β”œβ”€β”€ js/
    β”‚   └── ...
    β”œβ”€β”€ pages
    β”‚   β”œβ”€β”€ some-page.md
    β”‚   β”œβ”€β”€ other-page.md
    β”‚   └── ...
    β”œβ”€β”€ posts
    β”‚   β”œβ”€β”€ my-first-post.md
    β”‚   β”œβ”€β”€ my-second-post.md
    β”‚   └── ...
    β”œβ”€β”€ templates
    β”‚   β”œβ”€β”€ my-template-name/
    β”‚   β”‚   β”œβ”€β”€ assets/
    β”‚   β”‚   β”‚   β”œβ”€β”€ img/
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ template_img.jpg
    β”‚   β”‚   β”‚   β”‚   └── ...
    β”‚   β”‚   β”‚   β”œβ”€β”€ css/
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ template_style.css
    β”‚   β”‚   β”‚   β”‚   └── ...
    β”‚   β”‚   β”‚   β”œβ”€β”€ js/
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ template_scripts.js
    β”‚   β”‚   β”‚   β”‚   └── ...
    β”‚   β”‚   β”‚   └── ...
    β”‚   β”‚   β”œβ”€β”€ layouts/
    β”‚   β”‚   β”‚   β”œβ”€β”€ list.hbs
    β”‚   β”‚   β”‚   β”œβ”€β”€ page.hbs
    β”‚   β”‚   β”‚   β”œβ”€β”€ post.hbs
    β”‚   β”‚   β”‚   └── tag.hbs
    β”‚   β”‚   β”œβ”€β”€ partials/
    β”‚   β”‚   β”‚   β”œβ”€β”€ any_partial.hbs
    β”‚   β”‚   β”‚   └── ...
    β”‚   β”‚   └── index.hbs
    β”‚   └── ...
    └── config.yml
    

    Config

    The source root directory contains a config.yml file that represents the site configuration. Example:

    # The title of the site; default: empty string
    title: 'Site title'
    
    # A subheading/slogan; default: empty string
    subHeading: 'A secondary heading'
    
    # source/templates directory can contain multiple templates,
    # the one with this name is rendered; default: empty string
    template: 'my-template-name'
    
    # Number of posts that is displayed on one page; default: 10
    postsPerPage: 10
    
    # Root url of site; default: "https://github.com/"
    url: 'https://my-page.com'
    
    # Date format string on the site; default: 'yyyy-MM-dd'
    dateFormat: 'yyyy-MM-dd'
    
    # Site-wide meta properties, e.g. 'og:title', etc.
    metaProperties:
      - property: 'og:image'
        content: '/assets/my-facebook-share-image.jpg'
    
    # Google analytics tracker code, could be used in template for rendering google analytics script
    # optional
    gTag: 'UA-12345678-1'
    
    # Disqus identifier, could be used in template for rendering disqus script
    disqusId: 'my-disqus-id'

    Pages

    Files in the pages directory are rendered directly into the output root directory, so they can be reached on the /pagename path. The following names are therefore reserved and not allowed for pages:

    • assets
    • pages
    • posts
    • tags
    • index

    Page metadata is passed in the file’s front matter header and content is parsed as markdown. Currently pages only have title metadata:

    ---
    # Front matter header is between --- marks
    title: 'About'
    ---
    
    # Here comes the markdown content
    
    ...
    

    Posts

    Posts are generated based on their filename and are available on the /posts/post-file-name path, so it is recommended to name the original file in an SEO-friendly way. The format is the same front matter + markdown as with pages. Example:

    ---
    # Front matter header is between --- marks
    
    # Title of the post
    title: 'My first blog post'
    
    # Excerpt that can be used on list page for example
    excerpt: 'A longer description of my post'
    
    # Date of creation (list page orders by this)
    createdAt: '2020-04-01'
    
    # Author
    postedBy: 'codernr'
    
    # Tags
    tags: [ 'blog', 'C#', 'Github' ]
    
    # Post specific meta properties, these are merged with site meta properties
    # Properties defined here take precedence over site properties so an 'og:image' defined here overrides the default one
    metaProperties:
      - property: 'og:image'
        content: '/assets/img/my-custom-share-image.jpg'
    ---
    
    # Here comes the markdown content
    
    ...
    

    Tags

    Tags are collected from posts’ metadata and grouped by slug. Special characters are stripped and hyphens are added when generating slugs, so C# and C become the same tag; keep this in mind when tagging posts. Tag pages are generated under the /tag/tagname path, where a list of posts using that tag is passed to the template.
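
    As a rough illustration of that slugging rule (a Python sketch of the idea only; the actual Bloggen.Net implementation is C# and may differ in detail), a slug function of this shape collapses C# and C into the same slug:

    # Illustrative slug rule only; not the project's implementation.
    import re

    def slugify(tag: str) -> str:
        cleaned = re.sub(r"[^a-z0-9\s-]", "", tag.lower())  # strip special characters
        return re.sub(r"\s+", "-", cleaned).strip("-")       # whitespace becomes hyphens

    assert slugify("C#") == slugify("C") == "c"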

    Post pagination

    Posts are ordered descending by creation date and paginated as defined in config. The first page is always rendered as index.html and the other pages are rendered under /pages/{page-number} path.

    Assets

    There are two sources of assets:

    • site-wide assets in source/assets directory
    • template specific assets in source/templates/<selected-template>/assets

    These folders are merged during generation to the /assets path which means that a template asset with the same name as a site asset overwrites it.

    Templates

    There is an example template that is used by my blog, forked from Start bootstrap, see it here.

    Bloggen.Net uses Handlebars.Net as a templating engine. The pages are rendered from a main template file index.hbs and embedded layout files that are specific for the type of content.

    A variable called site is available within all templates, with the following structure:

    {
      config: {
        title: '...',
        subheading: '...',
        template: '...',
        postsperpage: '...'
        // ... and all the config values
      },
      tags: [ // all the tags ordered by name
        { /* see details at tag layout description */ }
      ],
      pages: [
        { /* see details at list layout description */ }
      ]
    }

    List layout (layouts/list.hbs)

    This template is rendered with the paginated post objects. Pagination objects are available in {{site.pages}} array in the template. Structure:

    {
      pageNumber: 2,
      items: [ { /* see structure at post layout */ } ],
      url: '/pages/2', // or '/' if on the first page, which is index.html
      totalcount: 3, // number of pages
      previous: { /* the previous pagination object */ },
      next: { /* the next pagination object */ }
    }

    Page layout (layouts/page.hbs)

    Pages are rendered with this template. Page data is available in the {{data}} variable in the template. Structure is the same as described in posts section.

    Page content is available in the {{content}} variable. To render markdown as html see helpers.

    Post layout (layouts/post.hbs)

    The same as pages. Metadata is in the {{data}} variable, content is in {{content}}. See metadata in posts section.

    Tag layout (layouts/tag.hbs)

    This template is used to render one specific tag’s post list. Tag metadata is available in {{data}} variable:

    {
      name: 'tag name',
      postreferences: [ { /* see details at posts description */ } ],
      url: '/tags/tag-name'
    }

    Template helpers

    There are two registered helpers in Bloggen.Net by default:

    • Date helper that renders date objects with the format string from config; usage: {{date datevariable}}
    • Html helper that renders markdown content as html; usage: {{html content}}

    Automation

    Since this is a command line tool, it can be used in an automated setup. My own blog is generated this way; a source repository is set up to trigger the generation of files and a git push to my user site repository. For details, see the workflow file.

    Visit original content creator repository
  • substrate-demo

    Basic Substrate Blockchain

    A UTXO (Unspent Transaction Output) chain implementation on Substrate.

    Installation

    1. Install or update Rust

    curl https://sh.rustup.rs -sSf | sh
    
    rustup update nightly
    rustup target add wasm32-unknown-unknown --toolchain nightly
    rustup update stable

    UI Demo

    In this UI demo, you will interact with the UTXO blockchain via the Polkadot UI.

    The following example takes you through a scenario where:

    • Alice already owns a UTXO of value 100 upon genesis
    • Alice sends Bob a UTXO with value 50, tipping the remainder to validators
    1. Compile and build a release node
    cargo +nightly build --release
    2. Start a node. The --dev flag will start a single mining node, and the --tmp flag will start it in a new temporary directory.
    ./target/release/utxo-workshop --dev --tmp
    3. In the console, note the helper printouts. In particular, notice that the default account Alice already has 100 UTXO within the genesis block.

    4. Open Polkadot JS, making sure the client is connected to your local node by going to Settings > General and selecting Local Node in the remote node dropdown.

    5. Declare custom datatypes in Polkadot JS, as the frontend cannot automatically detect this information. To do this, go to the Settings > Developer tab and paste in the following JSON:

    {
      "Address": "AccountId",
      "LookupSource": "AccountId",
      "Value": "u128",
      "TransactionInput": {
        "outpoint": "Hash",
        "sigscript": "H512"
      },
      "TransactionOutput": {
        "value": "Value",
        "pubkey": "Hash"
      },
      "Transaction": {
        "inputs": "Vec<TransactionInput>",
        "outputs": "Vec<TransactionOutput>"
      },
      "Difficulty": "U256",
      "DifficultyAndTimestamp": {
        "difficulty": "Difficulty",
        "timestamp": "Moment"
      },
      "Public": "H256"
    }
    6. Confirm that Alice already has 100 UTXO at genesis. In Chain State > Storage, select utxo. Input the hash 0x76584168d10a20084082ed80ec71e2a783abbb8dd6eb9d4893b089228498e9ff. Click the + notation to query blockchain state.

      Notice that:

      • This UTXO has a value of 100
      • This UTXO belongs to Alice’s pubkey. You can use the subkey tool to confirm that the pubkey indeed belongs to Alice.
    7. Spend Alice’s UTXO, giving 50 to Bob. In the Extrinsics tab, invoke the spend function from the utxo pallet, using Alice as the transaction sender. Use the following input parameters:

      • outpoint: 0x76584168d10a20084082ed80ec71e2a783abbb8dd6eb9d4893b089228498e9ff
      • sigscript: 0x6ceab99702c60b111c12c2867679c5555c00dcd4d6ab40efa01e3a65083bfb6c6f5c1ed3356d7141ec61894153b8ba7fb413bf1e990ed99ff6dee5da1b24fd83
      • value: 50
      • pubkey: 0x8eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a48

      Send as an unsigned transaction. With UTXO blockchains, the proof is already in the sigscript input.

    8. Verify that your transaction succeeded. In Chain State, look up the newly created UTXO hash: 0xdbc75ab8ee9b83dcbcea4695f9c42754d94e92c3c397d63b1bc627c2a2ef94e6 to verify that a new UTXO of 50, belonging to Bob, now exists! You can also verify that Alice’s original UTXO has been spent and no longer exists in UtxoStore.

    Visit original content creator repository

  • bookstore

    📗 Table of Contents

    📖 Bookstore React Project

    The Bookstore React App is a single-page application that allows users to browse and purchase books. It is built using the React JavaScript library and features a navbar and footer that provide navigation throughout the app. Users can register and login to create and manage their accounts, and they can add and remove books from a shopping cart. A search bar allows users to find books by title, author, or genre. A list of books that are currently in stock is also available, and each book has a page where users can view more information, such as the book’s description, reviews, and price. Finally, users can view their past orders on an order history page.

    The project is built with React, JSX, CSS, and JavaScript. It is also deployed on Heroku, so you can try it out by visiting the live demo.

    The app is still under development, but it is a good example of how React can be used to build a dynamic and interactive web application. Additional features that could be added in the future include filtering books by genre, price, or other criteria; adding books to a wishlist; rating and reviewing books; subscribing to email notifications about new books; purchasing books in different currencies; and translating the app into different languages.

    🛠 Built With

    • React
    • JSX
    • CSS
    • Javascript ES6
    • Visual Studio Code
    • ESLint
    • Stylelint

    Key Features

    The key features of this project include the following.

    • A navbar and footer that provide navigation throughout the app.
    • A register and login form for users to create and manage their accounts.
    • A shopping cart where users can add and remove books.
    • A search bar that allows users to find books by title, author, or genre.
    • A list of books that are currently in stock.
    • A page for each book where users can view more information, such as the book’s description, reviews, and price.
    • An order history page where users can view their past orders.

    (back to top)

    🚀 Live Demo

    (back to top)

    💻 Getting Started

    Get ready to explore the cosmos with these steps:

    Prerequisites

    Ensure you have:

    • A Web Browser such as Microsoft Edge or Google Chrome 🌐
    • Git 🐙
    • A code editor such as Visual Studio Code 👨‍💻

    Setup

    Use git clone to get your local copy of the project.

    git clone https://github.com/katarighe/bookstore-react.git

    Install

    Run npm install to set up the required packages.

    npm install

    Run Tests

    To run tests, run the following command in your terminal:

     npm test
    

    Usage

    Launch the app with the following command:

      npm start
    

    (back to top)

    👥 Authors

    👤 Mohamed Aden Ighe

    (back to top)

    🔭 Future Features

    Here are some future features that could be added to the Bookstore React app in the future.

    • User authentication and authorization: This would allow users to create accounts, sign in and out, and have their own personal bookshelves.
    • Shopping cart: This would allow users to add books to their cart and checkout.
    • Payment processing: This would allow users to pay for their purchases with a credit card or other payment method.
    • Shipping and delivery: This would allow users to track the status of their orders and have their books shipped to them.
    • Reviews and ratings: This would allow users to leave reviews and ratings of books they have read.
    • Wishlist: This would allow users to save books they are interested in buying for later.
    • Personalization: This would allow the app to be customized to each user’s preferences, such as their favorite genres or authors.
    • Social features: This would allow users to connect with other users, share book recommendations, and discuss books.

    These are just a few ideas for future features that could be added to the Bookstore React app. The specific features that are added will depend on the needs and wants of the users.

    (back to top)

    🤝 Contributing

    Contributions, issues, and feature requests are welcome!

    Feel free to check the issues page.

    (back to top)

    ⭐️ Show your support

    Give a star ⭐️ or a thumbs up 👍 if you like this project! You can visit my GitHub profile for more of my projects.

    (back to top)

    🙏 Acknowledgments

    (back to top)

    📝 License

    This project is MIT licensed.

    (back to top)

    Visit original content creator repository

  • GAMR

    GAMR: An Enhanced Non-dominated Sorting Genetic Algorithm II-based Dynamic multi-objective QoS routing in Software-Defined Networks

    Routing optimization plays a crucial role in traffic engineering, aiming to efficiently allocate network resources to meet various service requirements. In dynamic network environments, however, network configurations constantly change, so single-objective routing faces numerous challenges in managing multiple concurrent demands. Moreover, the complexity of the problem increases as end-to-end Quality of Service (QoS) requirements and the conflicts between them accumulate. Several approaches have been proposed to address this issue, but most of them fall into the stability-plasticity dilemma or involve excessive computation or convergence times in practical implementations. We introduce a dynamic multi-objective QoS routing approach based on NSGA-II (Non-dominated Sorting Genetic Algorithm II), called GAMR, utilizing QoS metrics to construct a multi-objective function. By proposing new initialization and crossover strategies, our solution can find optimal solutions within a short runtime. Additionally, the GAMR application is deployed on the control plane within Software-Defined Networks (SDNs) and evaluated against benchmark methods under various settings. Compared to multi-objective algorithms, the proposed method demonstrates significant improvements in performance indicators. It shows enhancements ranging from 3.4% to 22.8% on the Hypervolume (HV) metric and from 33% to 86% on the Inverted Generational Distance (IGD) metric. Regarding the objectives, the experimental results demonstrate that our method reduces the forwarding delay and packet loss rate to 41.25ms and 3.9%, respectively, under the most challenging network configuration scenario (with only 2 servers and up to 100 requests) on the Chinanet network (44 nodes).

    Requirements

    The project requires the following Python libraries:

    • ryu
    • fastapi[all]
    • mininet
    • networkx
    • numpy
    • requests

    Project Structure

    The project is organized into four main modules:

    • dynamicsdn: Contains the NSGA-II implementation and Dijkstra’s algorithm for path selection.
    • scenario: Stores different scenarios for automated testing.
    • sdndb: Provides a database for storing network information and routing results.
    • mn_restapi: Implements a REST API hook for interacting with Mininet scripts.
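
    To give a feel for the multi-objective view of routing that dynamicsdn implements, below is an illustrative Python sketch (not the repository's actual code) that scores candidate paths on two assumed per-link QoS attributes, delay and loss, and keeps the non-dominated ones. GAMR itself explores this trade-off space with NSGA-II rather than by exhaustive enumeration.

    # Toy illustration only; edge attributes "delay" and "loss" are assumptions for the example.
    import networkx as nx

    def path_objectives(g, path):
        """Return (delay, loss) for a path; lower is better for both objectives."""
        edges = list(zip(path, path[1:]))
        delay = sum(g[u][v]["delay"] for u, v in edges)
        survive = 1.0
        for u, v in edges:
            survive *= 1.0 - g[u][v]["loss"]
        return delay, 1.0 - survive

    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_paths(g, src, dst, cutoff=6):
        """Enumerate simple paths (toy scale only) and keep the Pareto-optimal ones."""
        scored = [(p, path_objectives(g, p)) for p in nx.all_simple_paths(g, src, dst, cutoff=cutoff)]
        return [(p, o) for p, o in scored if not any(dominates(o2, o) for _, o2 in scored)]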

    Startup Instructions

    1. Set environment variable: Before starting the application, add the parent directory of the project to your Python path. In your terminal, run:
    export PYTHONPATH={parent directory of project}
    2. Start application: Run the following command to start both Ryu and the application:
    ./startup.py

    Further Information

    • For detailed documentation, please refer to the project’s internal documentation.
    • Feel free to open issues on GitHub for any questions or feedback.

    Visit original content creator repository

  • image-stitching

    Image Stitching

    Although there are multiple implementations of image stitching using different libraries (OpenCV, skimage, …), this repository implements image stitching using only numpy for computations and cv2 for displaying images. Both projective transformation fitting and RANSAC are implemented using numpy, but the undistortion is done using the cv2 library.
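
    As a rough sketch of what fitting a projective transformation with numpy involves (this mirrors the approach but is not this repository's exact code), a homography can be estimated from point correspondences with the DLT algorithm and made robust to outliers with a simple RANSAC loop:

    # Minimal numpy sketch of homography fitting (DLT) plus RANSAC; illustrative only.
    import numpy as np

    def fit_homography(src, dst):
        """src, dst: (N, 2) arrays of matching points, N >= 4. Returns 3x3 H with dst ~ H(src)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def project(H, pts):
        """Apply homography H to (N, 2) points."""
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    def ransac_homography(src, dst, iters=1000, thresh=3.0, seed=0):
        """Repeatedly fit H on random 4-point samples and keep the model with the most inliers."""
        rng = np.random.default_rng(seed)
        best_H, best_inliers = None, 0
        for _ in range(iters):
            idx = rng.choice(len(src), size=4, replace=False)
            H = fit_homography(src[idx], dst[idx])
            errors = np.linalg.norm(project(H, src) - dst, axis=1)
            inliers = int((errors < thresh).sum())
            if inliers > best_inliers:
                best_H, best_inliers = H, inliers
        return best_H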

    Repository Contents

    There are 2 example photos already present in distorted and undistorted form: examples/distorted000.png, examples/undistorted000.png, examples/distorted001.png, examples/undistorted001.png:

    Example 000:

    distorted000 undistorted000

    Example 001:

    distorted001 undistorted001

    If you wish to take your own photos using an ESP32-CAM, you can capture them with the image_capture.py script. You can take a screenshot with the space bar. The repository also contains examples/matching_points_000_001.txt – 5 pairs of matching keypoints between images examples/undistorted000.png and examples/undistorted001.png, picked by hand.
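
    Assuming each row of that file holds the pixel coordinates (x1, y1) of a keypoint in the first image followed by its match (x2, y2) in the second (the column layout is an assumption; the raw contents are shown below), the pairs can be loaded with numpy and fed to a homography fit such as the sketch in the introduction:

    import numpy as np

    pts = np.loadtxt("examples/matching_points_000_001.txt")  # shape (5, 4)
    src, dst = pts[:, :2], pts[:, 2:]  # assumed layout: (x1, y1, x2, y2)
    # H = fit_homography(src, dst)     # using the sketch shown earlier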

    Contents of examples/matching_points_000_001.txt:

    295 264 25 272
    295 368 27 385
    335 266 72 271
    336 369 76 381
    507 265 251 266
    

    How To Run The Code

    To set up the environment, it is recommended to create a virtual environment in this directory:

    python -m venv venv

    Activate the virtual environment and install required dependencies:

    . venv/bin/activate
    pip install -r requirements.txt

    There are 2 main scripts that can be executed:

    1. Run python stitch_using_points.py <path_to_img1> <path_to_img2> <path_to_points_file> to generate a stitched image from 2 given images using hand-picked matching keypoints. You can simply run python stitch_using_points.py to use example images. Sample panorama generated by this script examples/panorama_task_5.png: panorama_task_5

    2. Run python stitch.py <path_to_img1> <path_to_img2> to generate a stitched image from 2 given images using automatically picked matching keypoints. You can simply run python stitch.py to use example images. Sample panorama generated by this script examples/panorama_task_7.png: panorama_task_7

    Running Tests

    Execute:

    pytest .
    Visit original content creator repository
  • onboarding_human_ai

    IntegrAI: Effective Human-AI Teams via Learned Natural Language Rules and Onboarding

    Associated code for the paper Effective Human-AI Teams via Learned Natural Language Rules and Onboarding, published at NeurIPS 2023 (spotlight).

    What is it?

    People are relying on AI agents to assist them with various tasks. The human must know when to rely on the agent, when to collaborate with it, and when to ignore its suggestions. Our procedure gives a way to better understand how the human and the AI should collaborate.

    The first piece is the human’s prior knowledge and trust of the AI, i.e., does the human trust the AI on all the data, never trust the AI, or trust the AI on only a subset of the data? Given the human’s prior, we discover and describe regions of the data space that disprove that prior. For example, if the human always trusted the AI, we find one or more subsets of the data where the AI performs worse than the human (and vice versa) and describe each in natural language.

    Concretely, our procedure is composed of two parts:

    • A region discovery algorithm (IntegrAI-discover) that discovers such subsets of the data space as local neighborhoods in a cross-modal embedding space.

    • A region description algorithm (IntegrAI-describe) that describes these subsets in natural language using large language models (LLMs).

    Each of these algorithms is implemented in this repo as well as baseline approaches with multiple datasets to test them on.

    The algorithm IntegrAI can be used to compare two models or look at the errors of a single model.
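
    As a conceptual illustration of the discovery step (this is not the IntegrAI-discover algorithm from the paper or this repo's API, just the general idea), one can search an embedding space for the local neighborhood where the AI's accuracy deviates most from the human's:

    # Conceptual toy example only -- not the paper's algorithm or this repo's API.
    import numpy as np

    def find_region(embeddings, ai_correct, human_correct, radius=1.0, min_size=5):
        """embeddings: (N, d); ai_correct, human_correct: (N,) arrays of 0/1 outcomes.
        Returns the center index of the neighborhood where the AI most outperforms the human
        (negate `gain` to find regions where the human should not rely on the AI)."""
        gain = ai_correct.astype(float) - human_correct.astype(float)
        best_center, best_score = None, -np.inf
        for i in range(len(embeddings)):
            members = np.linalg.norm(embeddings - embeddings[i], axis=1) < radius
            if members.sum() < min_size:
                continue
            score = gain[members].mean()
            if score > best_score:
                best_center, best_score = i, score
        return best_center, best_score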

    For a demo, see the Colab Jupyter notebook. The main code is in the integrai folder.

    Overview of IntegrAI procedure

    Installation

    Clone the repo:

    git clone https://github.com/clinicalml/onboarding_human_ai.git

    For using the IntegrAI algorithm and the demo, the following requirements suffice:

    pip install -r requirements.txt

    For replicating the paper results:

    cd into the repo and create a new conda environment (Python 3.8.13) from our environment.yml file:

    conda env create -f environment.yml

    Finally, activate the environment:

    conda activate onboardai

    To download pre-processed datasets and user study data, use this Google storage link https://storage.googleapis.com/public-research-data-mozannar/data_saved_onboarding.zip

    Demo and Guide

    For an example of how to use IntegrAI on an image classification task, see the notebook demo_imagenet.ipynb. For a Colab version, please check the Colab Jupyter notebook.

    An NLP demo will soon be provided as well.

    Organization

    This code repository is structured as follows:

    • in integrai we have a minimal code implementation of our algorithm IntegrAI – if you’re just interested in applying the method, only look at this folder

    • in src we have the code for the core functionalities of our algorithms for the paper organized as follows:

      src/datasets_hai has files for each dataset used in our method and code to download and process the datasets.

      src/describers has files for each region description method in our paper

      src/teacher_methods has files for each region discovery method in our paper

      src/teacher_methods has a notebook to showcase the human-AI card

    • in interface_user_study we have the raw code for the interfaces used in the BDD and MMLU user studies (with Firebase)

    • in experiments we have jupyter notebooks to reproduce the results in Section 6 (method evaluation)

    • in user_study_analysis we have jupyter notebooks to reproduce the results in Section 7 (user study results)

    Paper Reproducibility

    To reproduce figures and results from the paper, you can run the following notebooks:

    Note: all experiments involve randomness, so results are not deterministic.

    Citation

    @article{mozannar2023effective,
         title={Effective Human-AI Teams via Learned Natural Language Rules and Onboarding}, 
          author={Hussein Mozannar and Jimin J Lee and Dennis Wei and Prasanna Sattigeri and Subhro Das and David Sontag},
          year={2023},
          journal={Advances in Neural Information Processing Systems}
    }
    

    Acknowledgements

    This work is partially funded by the MIT-IBM Watson AI Lab.

    Visit original content creator repository