Complete on-chain tasks and get a constant stream of income
Description
Repudiation (denial of an act) is one of the major issues in creating offers or agreements with anyone online: you must trust that the other party will actually provide the money or service once the deal is completed. SuperOffers solves this problem. The entire logic is handled on-chain without any DAO or third party involved, which makes it completely trust-free and transparent.
The workflow involves two different types of users:
Offerer
Claimer
The Offerer creates a super offer declaring the total bounty, reward, and so on. Here comes the interesting part: the offer is not created by any off-chain logic. The offerer enters a contract address and contract ABI, then chooses a function signature and the value that calling the function is expected to return. Multiple functions from multiple contracts can be added to the same offer. When a claimer satisfies this logic, they can apply for a claim, and the smart contract starts streaming tokens to them until the offer timeline ends. This way, both task verification and money streaming happen completely on-chain, with no chance of denying an act or betrayal (see the sketch after the use cases below).
I am looking forward to the Streaming Distributions (coming soon) feature in SuperFluid, which would be perfectly suited for this use case. It would also save me some complex smart contract logic.
Possible use cases for Super Offers can be:
Hold 2 Bored Ape NFTs
Hold 25 LINK Tokens.
Make 10 proposals in CompoundDAO
Hold Token #1200 in the CryptoPunks NFT collection.
The possibilities are endless..
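To make the claim logic concrete, here is a rough off-chain illustration in Python using web3.py. It only sketches the kind of condition an offerer registers (contract address, ABI, function, expected value); in SuperOffers the actual verification runs on-chain inside the smart contract, and the RPC endpoint, token address, and ABI below are placeholders.

from web3 import Web3

# Placeholders: the offerer supplies the RPC endpoint, contract address, and ABI.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.com"))
LINK_TOKEN_ADDRESS = "0x..."  # placeholder token contract address
ERC20_ABI = [...]             # placeholder contract ABI
link = w3.eth.contract(address=LINK_TOKEN_ADDRESS, abi=ERC20_ABI)

def satisfies_offer(claimer: str) -> bool:
    # Example condition "Hold 25 LINK Tokens": call the chosen function
    # (balanceOf) and compare its return value against the expected value.
    return link.functions.balanceOf(claimer).call() >= 25 * 10**18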
Challenges
I struggled with developing the logic for my Solidity smart contracts. I ran into a lot of issues when making low-level calls to smart contracts, and then when debugging and verifying the execution of the logic.
I have developed dApps with theGraph before, but this time I ran into a lot of issues deploying my subgraph.
All other challenges were manageable.
This guide helps Salesforce developers who are new to Visual Studio Code go from zero to a deployed app using Salesforce Extensions for VS Code and Salesforce CLI.
Part 1: Choosing a Development Model
There are two types of developer processes or models supported in Salesforce Extensions for VS Code and Salesforce CLI. These models are explained below. Each model offers pros and cons and is fully supported.
Package Development Model
The package development model allows you to create self-contained applications or libraries that are deployed to your org as a single package. These packages are typically developed against source-tracked orgs called scratch orgs. This development model is geared toward a more modern type of software development process that uses org source tracking, source control, and continuous integration and deployment.
If you are starting a new project, we recommend that you consider the package development model. To start developing with this model in Visual Studio Code, see Package Development Model with VS Code. For details about the model, see the Package Development Model Trailhead module.
If you are developing against scratch orgs, use the command SFDX: Create Project (VS Code) or sfdx force:project:create (Salesforce CLI) to create your project. If you used a different command, you might want to start over with one of these commands.
When working with source-tracked orgs, use the commands SFDX: Push Source to Org (VS Code) or sfdx force:source:push (Salesforce CLI) and SFDX: Pull Source from Org (VS Code) or sfdx force:source:pull (Salesforce CLI). Do not use the Retrieve and Deploy commands with scratch orgs.
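For example, a typical scratch-org round trip with Salesforce CLI might look like this (the project name and scratch-org definition file are illustrative):

sfdx force:project:create -n my-project
cd my-project
sfdx force:org:create -s -f config/project-scratch-def.json
sfdx force:source:push
sfdx force:source:pull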
Org Development Model
The org development model allows you to connect directly to a non-source-tracked org (sandbox, Developer Edition (DE) org, Trailhead Playground, or even a production org) to retrieve and deploy code directly. This model is similar to the type of development you have done in the past using tools such as Force.com IDE or MavensMate.
If you are developing against non-source-tracked orgs, use the command SFDX: Create Project with Manifest (VS Code) or sfdx force:project:create --manifest (Salesforce CLI) to create your project. If you used another command, you might want to start over with this command to create a Salesforce DX project.
When working with non-source-tracked orgs, use the commands SFDX: Deploy Source to Org (VS Code) or sfdx force:source:deploy (Salesforce CLI) and SFDX: Retrieve Source from Org (VS Code) or sfdx force:source:retrieve (Salesforce CLI). The Push and Pull commands work only on orgs with source tracking (scratch orgs).
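For example, a minimal org-development round trip might look like this (my-project is illustrative; the manifest path is the project default):

sfdx force:project:create -n my-project --manifest
cd my-project
sfdx force:source:retrieve -x manifest/package.xml
sfdx force:source:deploy -x manifest/package.xml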
The sfdx-project.json File
The sfdx-project.json file contains useful configuration information for your project. See Salesforce DX Project Configuration in the Salesforce DX Developer Guide for details about this file.
The most important parts of this file for getting started are the sfdcLoginUrl and packageDirectories properties.
The sfdcLoginUrl specifies the default login URL to use when authorizing an org.
The packageDirectories property tells VS Code and Salesforce CLI where the metadata files for your project are stored. You need at least one package directory set in your file. The default setting is shown below. If the path value of a packageDirectories entry is set to force-app, by default your metadata goes in the force-app directory. If you want to change that directory to something like src, simply change the path value and make sure the directory you're pointing to exists.
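A minimal default sfdx-project.json looks roughly like this (the exact sourceApiVersion depends on your CLI release):

{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true
    }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "45.0"
}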
Don't deploy your code to production directly from Visual Studio Code. The deploy and retrieve commands do not support transactional operations, which means that a deployment can fail in a partial state. Also, the deploy and retrieve commands don't run the tests needed for production deployments. The push and pull commands are disabled for orgs that don't have source tracking, including production orgs.
RLCodebase is a modularized codebase for deep reinforcement learning algorithms based on PyTorch. This repo aims to provide a user-friendly reinforcement learning codebase for beginners to get started with and for researchers to try their ideas quickly and efficiently.
For now, it implements the DQN (with PER), A2C, PPO, DDPG, TD3, and SAC algorithms, and has been tested on Atari, Procgen, MuJoCo, PyBullet, and DMControl Suite environments.
Introduction
The design of RLCodebase is outlined below.
Config: Config is a class that contains parameters for reinforcement learning algorithms (such as the discount factor and learning rate) and general configurations (such as the random seed and saving path).
Trainer: Trainer is a wrapper class that controls the workflow of reinforcement learning training. It manages the interactions between submodules (Agent, Env, Memory).
Agent: Agent chooses actions to take given states. It also defines how to update the model given a batch of data.
Model: Model gathers all neural networks to train.
Env: Env is a vectorized gym environment.
Memory: Memory stores experiences utilized for RL training.
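As a rough picture of how these pieces fit together, a training run might be wired up along the following lines. This is a hypothetical sketch: the class names and call signatures are assumptions for illustration, not the repo's actual API.

# Hypothetical wiring of the modules described above; names are assumptions.
config = Config()                                       # algorithm and general parameters
env = make_vec_env(config)                              # vectorized gym environment
model = Model(env.observation_space, env.action_space)  # networks to train
agent = Agent(model, config)                            # picks actions, defines updates
memory = Memory(config)                                 # experience storage
trainer = Trainer(agent, env, memory, config)
trainer.run()                                           # interaction + update loop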
Installation
All required packages are included in setup.py and requirements.txt. MuJoCo is needed for mujoco_py and the DM Control Suite. To support mujoco_py and dm_control, please refer to https://github.com/openai/mujoco-py and https://github.com/deepmind/dm_control. For mujoco_py 2.1.2.14 and dm_control (commit fe44496), you can download MuJoCo as follows:
cd ~
mkdir .mujoco
cd .mujoco
# for mujoco_py
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz
tar -xf mujoco210-linux-x86_64.tar.gz
# for dm control
wget https://github.com/deepmind/mujoco/releases/download/2.1.1/mujoco-2.1.1-linux-x86_64.tar.gz
tar -xf mujoco-2.1.1-linux-x86_64.tar.gz
AMTUI is a terminal-based user interface (TUI) application that allows you to interact with Prometheus Alertmanager using your terminal. It provides a convenient way to monitor alerts, view silences, and check the status of Alertmanager instances right from your command line.
Features
View active alerts with details such as severity, alert name, and description.
Browse and review existing silences in Alertmanager.
Filter alerts and silences using matchers.
Check the general status of your Alertmanager instance.
Installation
Using Homebrew
You can install AMTUI using the Homebrew package manager:
brew tap pehlicd/tap
brew install amtui
Using go install
You can install AMTUI using the go install command:
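go install github.com/pehlicd/amtui@latest

(The module path here is assumed to match the GitHub repository; the binary is installed into your Go bin directory.)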
Alternatively, you can build AMTUI from source. You'll need to have Go installed on your system; then follow these steps:
Clone the repository:
git clone https://github.com/pehlicd/amtui.git
Navigate to the project directory:
cd amtui
Build the application:
go build
Run the application:
./amtui
Usage
Once you’ve launched AMTUI, you can navigate through different sections using the following keyboard shortcuts:
Press 1 to view and interact with active alerts.
Press 2 to see existing silences.
Press 3 to check the general status of your Alertmanager instance.
Keyboard Shortcuts
q: Quit the application.
l: Focus on the preview list.
h: Focus on the sidebar list.
j: Move focus to the preview.
k: Move focus to the preview list.
CTRL + F: Focus on the filter input.
ESC: Return focus to the sidebar list.
Configuration
AMTUI uses a configuration file to connect to your Alertmanager instance. By default, the application will look for a configuration file at ~/.amtui.yaml. If the configuration file doesn’t exist, AMTUI will guide you through creating it with the necessary connection details.
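For reference, a configuration along these lines should work. The key names are assumed to mirror the command-line flags below and may differ from the actual file format:

host: 127.0.0.1
port: 9093
scheme: http
username: myuser   # optional, for basic authentication
password: mypass   # optional, for basic authentication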
You can also specify connection details using command-line flags:
amtui --host 127.0.0.1 --port 9093 --scheme http
AMTUI also supports basic authentication. You can specify the username and password using the --username and --password flags:
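amtui --host 127.0.0.1 --port 9093 --scheme http --username <your-username> --password <your-password>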
Contributing
If you'd like to contribute to AMTUI, feel free to submit pull requests or open issues on the GitHub repository. Your feedback and contributions are highly appreciated!
License
This project is licensed under the MIT License – see the LICENSE file for details.
The source root directory contains a config.yml file that represents the site configuration. Example:
# The title of the site; default: empty string
title: 'Site title'
# A subheading/slogan; default: empty string
subHeading: 'A secondary heading'
# source/templates directory can contain multiple templates,
# the one with this name is rendered; default: empty string
template: 'my-template-name'
# Number of posts that is displayed on one page; default: 10
postsPerPage: 10
# Root url of site; default: "https://github.com/"
url: 'https://my-page.com'
# Date format string on the site; default: 'yyyy-MM-dd'
dateFormat: 'yyyy-MM-dd'
# Site-wide meta properties, e.g. 'og:title', etc.
metaProperties:
  - property: 'og:image'
    content: '/assets/my-facebook-share-image.jpg'
# Google analytics tracker code, could be used in template for rendering google analytics script
# optional
gTag: 'UA-12345678-1'
# Disqus identifier, could be used in template for rendering disqus script
disqusId: 'my-disqus-id'
Pages
Files in the pages directory are rendered directly into the output root directory, so they can be reached at the /pagename path. The following names are therefore reserved and not allowed for pages:
assets
pages
posts
tags
index
Page metadata is passed in the file’s front matter header and content is parsed as markdown. Currently pages only have title metadata:
---
# Front matter header is between --- marks
title: 'About'
---
# Here comes the markdown content
...
Posts
Posts are generated based on their filename and are available at the /posts/post-file-name path, so it is recommended to give the source file an SEO-friendly name. The format is the same front matter + markdown as with pages. Example:
---
# Front matter header is between --- marks
# Title of the post
title: 'My first blog post'
# Excerpt that can be used on list page for example
excerpt: 'A longer description of my post'
# Date of creation (list page orders by this)
createdAt: '2020-04-01'
# Author
postedBy: 'codernr'
# Tags
tags: [ 'blog', 'C#', 'Github' ]
# Post specific meta properties, these are merged with site meta properties
# Properties defined here take precedence over site properties so an 'og:image' defined here overrides the default one
metaProperties:
- property: 'og:image'
content: '/assets/img/my-custom-share-image.jpg'
---
# Here comes the markdown content
...
Tags
Tags are collected from posts' metadata and grouped by slug. Special characters are stripped and hyphens are added when generating slugs, so C# and C become the same slug; this should be kept in mind when tagging posts. Tag pages are generated under the /tag/tagname path, where a list of posts using that tag is passed to the template.
Post pagination
Posts are ordered descending by creation date and paginated as defined in config. The first page is always rendered as index.html and the other pages are rendered under /pages/{page-number} path.
Assets
There are two sources of assets:
site-wide assets in source/assets directory
template specific assets in source/templates/<selected-template>/assets
These folders are merged during generation to the /assets path which means that a template asset with the same name as a site asset overwrites it.
Templates
There is an example template that is used by my blog, forked from Start bootstrap, see it here.
Bloggen.Net uses Handlebars.Net as a templating engine. The pages are rendered from a main template file index.hbs and embedded layout files that are specific for the type of content.
There is a variable called site available within all the templates, with the following structure:
{
  config: {
    title: '...',
    subHeading: '...',
    template: '...',
    postsPerPage: '...'
    // ... and all the config values
  },
  tags: [
    // all the tags ordered by name
    { /* see details at tag layout description */ }
  ],
  pages: [
    { /* see details at list layout description */ }
  ]
}
List layout (layouts/list.hbs)
This template is rendered with the paginated post objects. Pagination objects are available in {{site.pages}} array in the template. Structure:
{
  pageNumber: 2,
  items: [
    { /* see structure at post layout */ }
  ],
  url: '/pages/2', // or '/' if on the first page, which is index.html
  totalCount: 3, // number of pages
  previous: { /* the previous pagination object */ },
  next: { /* the next pagination object */ }
}
Page layout (layouts/page.hbs)
Pages are rendered with this template. Page data is available in the {{data}} variable in the template. Structure is the same as described in posts section.
Page content is available in the {{content}} variable. To render markdown as html see helpers.
Post layout (layouts/post.hbs)
The same as pages. Metadata is in the {{data}} variable, content is in {{content}}. See metadata in posts section.
Tag layout (layouts/tag.hbs)
This template is used to render one specific tag’s post list. Tag metadata is available in {{data}} variable:
{
  name: 'tag name',
  postReferences: [
    { /* see details at posts description */ }
  ],
  url: '/tags/tag-name'
}
Template helpers
There are two registered helpers in Bloggen.Net by default:
Date helper that renders date objects with the format string from config; usage: {{date datevariable}}
Html helper that renders markdown content as html; usage: {{html content}}
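For instance, a post layout might combine the two helpers like this (the markup is illustrative; data.title, data.postedBy, and data.createdAt come from the post front matter described above):

<article>
  <h1>{{data.title}}</h1>
  <p>Posted by {{data.postedBy}} on {{date data.createdAt}}</p>
  {{html content}}
</article>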
Automation
Since this is a command line tool, it can be used in an automated setup. My own blog is generated this way; a source repository is set up to trigger the generation of files and a git push to my user site repository. For details, see the workflow file.
In this UI demo, you will interact with the UTXO blockchain via the Polkadot UI.
The following example takes you through a scenario where:
Alice already owns a UTXO of value 100 upon genesis
Alice sends Bob a UTXO with value 50, tipping the remainder to validators
Compile and build a release node
cargo +nightly build --release
Start a node. The --dev flag will start a single mining node, and the --tmp flag will start it in a new temporary directory.
./target/release/utxo-workshop --dev --tmp
In the console, note the helper printouts. In particular, notice that the default account Alice already has 100 UTXO in the genesis block.
Open Polkadot JS making sure the client is connected to your local node by going to Settings > General and selecting Local Node in the remote node dropdown.
Declare custom datatypes in PolkadotJS as the frontend cannot automatically detect this information. To do this, go to Settings > Developer tab and paste in the following JSON:
Confirm that Alice already has 100 UTXO at genesis. In Chain State > Storage, select utxo. Input the hash 0x76584168d10a20084082ed80ec71e2a783abbb8dd6eb9d4893b089228498e9ff. Click the + notation to query blockchain state.
Notice that:
This UTXO has a value of 100
This UTXO belongs to Alice’s pubkey. You can use the subkey tool to confirm that the pubkey indeed belongs to Alice (see below)
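For example, inspecting the well-known development seed should print Alice's public key (assuming a standard subkey installation):

subkey inspect //Alice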
Spend Alice’s UTXO, giving 50 to Bob. In the Extrinsics tab, invoke the spend function from the utxo pallet, using Alice as the transaction sender. Use the following input parameters:
Send it as an unsigned transaction. With UTXO blockchains, the proof is already contained in the sigscript input.
Verify that your transaction succeeded. In Chain State, look up the newly created UTXO hash 0xdbc75ab8ee9b83dcbcea4695f9c42754d94e92c3c397d63b1bc627c2a2ef94e6 to verify that a new UTXO of 50, belonging to Bob, now exists! You can also verify that Alice’s original UTXO has been spent and no longer exists in UtxoStore.
The Bookstore React App is a single-page application that allows users to browse and purchase books. It is built using the React JavaScript library and features a navbar and footer that provide navigation throughout the app. Users can register and login to create and manage their accounts, and they can add and remove books from a shopping cart. A search bar allows users to find books by title, author, or genre. A list of books that are currently in stock is also available, and each book has a page where users can view more information, such as the book’s description, reviews, and price. Finally, users can view their past orders on an order history page.
The project is built with React, JSX, CSS, and JavaScript. It is also deployed on Heroku, so you can try it out by visiting the live demo.
The App is still under development, but it is a good example of how React can be used to build a dynamic and interactive web application. Some additional features that could be added to the app in the future include the ability to filter books by genre, price, or other criteria; the ability to add books to a wishlist; the ability to rate and review books; the ability to subscribe to email notifications about new books; the ability to purchase books in different currencies; and the ability to translate the app into different languages.
Built With
React
JSX
CSS
Javascript ES6
Visual Studio Code
ESLint
Stylelint
Key Features
The key features of this project include the following.
A navbar and footer that provide navigation throughout the app.
A register and login form for users to create and manage their accounts.
A shopping cart where users can add and remove books.
A search bar that allows users to find books by title, author, or genre.
A list of books that are currently in stock.
A page for each book where users can view more information, such as the book’s description, reviews, and price.
An order history page where users can view their past orders.
Here are some features that could be added to the Bookstore React app in the future.
User authentication and authorization: This would allow users to create accounts, sign in and out, and have their own personal bookshelves.
Shopping cart: This would allow users to add books to their cart and checkout.
Payment processing: This would allow users to pay for their purchases with a credit card or other payment method.
Shipping and delivery: This would allow users to track the status of their orders and have their books shipped to them.
Reviews and ratings: This would allow users to leave reviews and ratings of books they have read.
Wishlist: This would allow users to save books they are interested in buying for later.
Personalization: This would allow the app to be customized to each user’s preferences, such as their favorite genres or authors.
Social features: This would allow users to connect with other users, share book recommendations, and discuss books.
These are just a few ideas for future features that could be added to the Bookstore React app. The specific features that are added will depend on the needs and wants of the users.
GAMR: An Enhanced Non-dominated Sorting Genetic Algorithm II-based Dynamic multi-objective QoS routing in Software-Defined Networks
Routing optimization plays a crucial role in traffic engineering, aiming to allocate network resources efficiently to meet various service requirements. In dynamic network environments, however, network configurations change constantly, so single-objective routing faces numerous challenges in managing multiple concurrent demands. Moreover, the complexity of the problem increases as end-to-end Quality of Service (QoS) requirements, and the conflicts between them, accumulate. Several approaches have been proposed to address this issue, but most of them fall into the stability-plasticity dilemma or involve excessive computation or convergence times in practical implementations. We introduce a dynamic multi-objective QoS routing approach based on NSGA-II (Non-dominated Sorting Genetic Algorithm II), called GAMR, utilizing QoS metrics to construct a multi-objective function. By proposing new initialization and crossover strategies, our solution can find optimal solutions within a short runtime. Additionally, the GAMR application is deployed on the control plane within Software-Defined Networks (SDNs) and evaluated against benchmark methods under various settings. Compared to other multi-objective algorithms, the proposed method demonstrates significant improvements in performance indicators: enhancements ranging from 3.4% to 22.8% on the Hypervolume (HV) metric and from 33% to 86% on the Inverted Generational Distance (IGD) metric. Regarding the objectives, the experimental results demonstrate that our method reduces the forwarding delay and packet loss rate to 41.25 ms and 3.9%, respectively, under the most challenging network configuration scenario (only 2 servers and up to 100 requests) on the Chinanet network (44 nodes).
Requirements
The project requires the following Python libraries:
ryu
fastapi[all]
mininet
networkx
numpy
requests
Project Structure
The project is organized into four main modules:
dynamicsdn: Contains the NSGA-II implementation and Dijkstra’s algorithm for path selection.
scenario: Stores different scenarios for automated testing.
sdndb: Provides a database for storing network information and routing results.
mn_restapi: Implements a REST API hook for interacting with Mininet scripts.
Startup Instructions
Set environment variable: Before starting the application, add the parent directory of the project to your Python path. In your terminal, run:
export PYTHONPATH={parent directory of project}
Start application: Run the following command to start both Ryu and the application:
./startup.py
Further Information
For detailed documentation, please refer to the project’s internal documentation.
Feel free to open issues on GitHub for any questions or feedback.
Although there are multiple implementations of image stitching using different libraries (OpenCV, skimage, …), this repository implements image stitching using only numpy for the computations and cv2 for displaying images. Both projective transformation fitting and RANSAC are implemented in numpy; only the undistortion uses the cv2 library.
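As a flavor of what the numpy implementation involves, fitting a projective transform (homography) from point correspondences can be done with the direct linear transform and an SVD. This is a minimal sketch of the standard technique, not necessarily the exact code in this repository:

import numpy as np

def fit_homography(src, dst):
    # src, dst: (N, 2) arrays of matching points, N >= 4
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A, i.e. the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1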
Repository Contents
There are 2 example photos already present in distorted and undistorted form: examples/distorted000.png, examples/undistorted000.png, examples/distorted001.png, examples/undistorted001.png:
Example 000:
Example 001:
If you wish to take your own photos using ESP32-CAM you can capture them using image_capture.py script. You can take a screenshot with the space bar.
The repository also contains examples/matching_points_000_001.txt – 5 pairs of matching keypoints between images examples/undistorted000.png and examples/undistorted001.png picked by hand.
Run python stitch_using_points.py <path_to_img1> <path_to_img2> <path_to_points_file> to generate a stitched image from 2 given images using hand-picked matching keypoints. You can simply run python stitch_using_points.py to use example images. Sample panorama generated by this script examples/panorama_task_5.png:
Run python stitch.py <path_to_img1> <path_to_img2> to generate a stitched image from 2 given images using automatically picked matching keypoints. You can simply run python stitch.py to use example images. Sample panorama generated by this script examples/panorama_task_7.png:
People are relying on AI agents to assist them with various tasks. The human must know when to rely on the agent, collaborate with the agent, or ignore its suggestions. Our procedure provides a way to better understand how the human and the AI should collaborate.
The first piece is the human’s prior knowledge of and trust in the AI: does the human trust the AI on all the data, never trust the AI, or trust the AI on only a subset of the data? Given the human’s prior, we discover and describe regions of the data space that disprove it. For example, if the human always trusts the AI, we find one or more subsets of the data, described in natural language, where the AI performs worse than the human (and vice versa).
Concretely, our procedure is composed of two parts:
A region discovery algorithm (IntegrAI-discover) that discovers such subsets of the data space as local neighborhoods in a cross-modal embedding space.
A region description algorithm (IntegrAI-describe) that describes these subsets in natural language using large language models (LLMs).
Each of these algorithms is implemented in this repo as well as baseline approaches with multiple datasets to test them on.
The algorithm IntegrAI can be used to compare two models or look at the errors of a single model.
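To give a flavor of the discovery step, the following toy sketch scores embedding-space neighborhoods by the gap between AI and human accuracy. It illustrates the idea only and is not the paper's actual IntegrAI-discover algorithm:

import numpy as np

def neighborhood_gaps(emb, human_correct, ai_correct, k=50):
    # emb: (N, d) cross-modal embeddings; *_correct: (N,) boolean arrays
    gaps = []
    for i in range(len(emb)):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        idx = np.argsort(dist)[:k]  # k nearest neighbors of point i
        gap = ai_correct[idx].mean() - human_correct[idx].mean()
        gaps.append((gap, i))
    # Large positive gaps: regions where relying on the AI looks best;
    # large negative gaps: regions where the human should not rely on it.
    return sorted(gaps, reverse=True)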
For an example of how to use IntegrAI on an image classification task, see the notebook demo_imagenet.ipynb. For a Colab version, please check the Colab notebook.
An NLP demo will soon be provided as well.
Organization
This code repository is structured as follows:
in integrai we have a minimal code implementation of our algorithm IntegrAI – if you’re just interested in applying the method, only look at this folder
in src we have the code for the core functionalities of our algorithms for the paper organized as follows:
– src/datasets_hai has files for each dataset used in our method and code to download and process the datasets
– src/describers has files for each region description method in our paper
– src/teacher_methods has files for each region discovery method in our paper
Note: all experiments involve randomness, so results are not deterministic.
Citation
@article{mozannar2023effective,
title={Effective Human-AI Teams via Learned Natural Language Rules and Onboarding},
author={Hussein Mozannar and Jimin J Lee and Dennis Wei and Prasanna Sattigeri and Subhro Das and David Sontag},
year={2023},
journal={Advances in Neural Information Processing Systems}
}
Acknowledgements
This work is partially funded by the MIT-IBM Watson AI Lab.