
5 posts tagged with "real-time"

View All Tags

· 5 min read

Using GitHub events for analysis can help you better understand the behavior of developers on GitHub. By analyzing this data, you can draw useful conclusions to support your business needs.

Concept

About events in GitHub

GitHub is a web-based hosting service that provides version control and collaboration features for software development projects. It allows users to create and store repositories for their projects, track changes to code, collaborate with others on coding projects, and contribute to open source software projects.

GitHub provides a number of features for tracking events related to your projects. These events include:

  1. Push events - occur when a user pushes code changes to a repository.
  2. Pull request events - occur when a user creates, updates, or closes a pull request.
  3. Issue events - occur when a user creates, updates, or closes an issue.
  4. Release events - occur when a user creates a new release of a project.
  5. Fork events - occur when a user forks a repository.
  6. Watch events - occur when a user starts or stops watching a repository.

GitHub also allows you to configure webhooks, which can be used to send notifications to external services when certain events occur in your repository. For example, you can configure a webhook to send a notification to a chat service when a pull request is created or updated.
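Webhooks can be created from the repository's Settings page or, if you prefer, through the GitHub REST API. Below is a minimal sketch with curl; the token, OWNER/REPO, and target URL are placeholders you would replace:

# Create a webhook that delivers push and pull_request events as JSON.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{
    "name": "web",
    "active": true,
    "events": ["push", "pull_request"],
    "config": { "url": "https://example.com/webhook", "content_type": "json" }
  }'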

Overall, GitHub provides a comprehensive set of features for tracking events related to your projects and collaborating with others on coding projects.

About Snowflake

Snowflake is a cloud-based data warehousing and analytics platform that allows organizations to store, manage, and analyze large amounts of data in a scalable and cost-effective way.

One of the key features of Snowflake is its architecture, which separates compute and storage. This allows organizations to scale compute and storage resources independently, and pay only for the resources they use. Snowflake also supports both structured and semi-structured data, including JSON, Avro, Parquet, and ORC.

Snowflake also provides a number of built-in features and services for data warehousing and analytics.

Overall, Snowflake provides a modern and flexible solution for data warehousing and analytics in the cloud, with a focus on scalability, performance, and security.

Easy GitHub to Snowflake integration with Vanus

Vanus’s open-source connectors allow you to capture your GitHub events and automatically send the event data to Snowflake.

Prerequisites

  • GitHub: your open-source repository.
  • Snowflake: a working Snowflake account.
  • Vanus Playground: an online K8s environment where Vanus can be deployed.

Step 1: Deploying Vanus

  1. Log in to the Vanus Playground.

  2. Refer to the Quick Start document to complete the Install Vanus and Install vsctl steps.

  3. Create an eventbus

    ~ # vsctl eventbus create --name github-snowflake
    +----------------+------------------+
    | RESULT | EVENTBUS |
    +----------------+------------------+
    | Create Success | github-snowflake |
    +----------------+------------------+

Step 2: Deploy the GitHub Source

  1. Set the config file. Create config.yml in any directory with the following content:

    target: http://192.168.49.2:30002/gateway/github-snowflake
    port: 8082
  2. Run the GitHub Source

    docker run -it --rm --network=host \
    -v ${PWD}:/vanus-connect/config \
    --name source-github public.ecr.aws/vanus/connector/source-github > a.log &
  3. Create a webhook under the Settings tab in your GitHub repository, using the following values:

    Payload URL*

    The public address of the GitHub Source. In the Playground you can find it under the GitHub-Twitter Scenario tab, e.g.:

    http://ip10-1-53-4-cfie9skink*******0-8082.direct.play.linkall.com

    Content type

    application/json

    Which events would you like to trigger this webhook?

    Send me everything.
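After you save the webhook, GitHub sends an initial ping delivery. Assuming the Source container's output was redirected to a.log as in step 2, a quick way to confirm the delivery arrived is:

# Look for the incoming ping/webhook delivery in the GitHub Source output.
tail -n 20 a.log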

Step 3: Deploy the Snowflake Sink

  1. Create a yml file named sink-snowflake.yml in the playground with the following command:

    cat << EOF > sink-snowflake.yml
    apiVersion: v1
    kind: Service
    metadata:
      name: sink-snowflake
      namespace: vanus
    spec:
      selector:
        app: sink-snowflake
      type: ClusterIP
      ports:
        - port: 8080
          name: sink-snowflake
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: sink-snowflake
      namespace: vanus
    data:
      config.yml: |-
        port: 8080
        snowflake:
          host: "myaccount.ap-northeast-1.aws.snowflakecomputing.com"
          username: "vanus_user"
          password: "snowflake"
          role: "ACCOUNTADMIN"
          warehouse: "xxxxxx"
          database: "VANUS_DB"
          schema: "public"
          table: "vanus_test"
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sink-snowflake
      namespace: vanus
      labels:
        app: sink-snowflake
    spec:
      selector:
        matchLabels:
          app: sink-snowflake
      replicas: 1
      template:
        metadata:
          labels:
            app: sink-snowflake
        spec:
          containers:
            - name: sink-snowflake
              image: public.ecr.aws/vanus/connector/sink-snowflake
              imagePullPolicy: Always
              resources:
                requests:
                  memory: "128Mi"
                  cpu: "100m"
                limits:
                  memory: "128Mi"
                  cpu: "100m"
              ports:
                - name: http
                  containerPort: 8080
              volumeMounts:
                - name: config
                  mountPath: /vanus-connect/config
          volumes:
            - name: config
              configMap:
                name: sink-snowflake
    EOF
  2. Replace the config values under snowflake with your own.

    host: "myaccount.ap-northeast-1.aws.snowflakecomputing.com"
    username: "vanus_user"
    password: "snowflake"
    role: "ACCOUNTADMIN"
    warehouse: "xxxxxx"
    database: "VANUS_DB"
    schema: "public"
    table: "vanus_test"
  3. Run the Snowflake sink in Kubernetes.

    kubectl apply -f sink-snowflake.yml
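Before moving on, you may want to confirm the sink pod is healthy; one way to check, assuming the manifest above was applied as-is:

# Verify the Snowflake Sink deployment is running and inspect its logs.
kubectl get pods -n vanus -l app=sink-snowflake
kubectl logs -n vanus deployment/sink-snowflake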

Step 4: Create subscription

With the components above deployed, everything required to push GitHub events to Snowflake is in place. GitHub events can also be filtered and processed through the filter and transformer capabilities of Vanus.

  1. With a filter, you can drop other events and deliver only the GitHub events you are interested in, for example StarEvent (see the CloudEvents Adapter specification).
  2. Create a subscription in Vanus and set up a transformer to extract and edit the key information.
vsctl subscription create \
  --eventbus github-snowflake \
  --sink 'http://sink-snowflake:8080' \
  --filters '[
    { "exact": { "type": "com.github.star.created" } }
  ]' \
  --transformer '{
    "define": {
      "login": "$.data.repository.owner.login",
      "star": "$.data.repository.stargazers_count",
      "repo": "$.data.repository.html_url",
      "sender": "$.data.sender.login",
      "time": "$.data.repository.updated_at"
    },
    "template": "{\"owner\": \"<login>\",\"star\":\"<star>\",\"repo\":\"<repo>\",\"sender\":\"<sender>\",\"time\":\"<time>\"}"
  }'

Step 5: Test

Open the Snowflake console and use the following command to make sure Snowflake has the data.

select * from public.vanus_test;

Summary

Snowflake provides a powerful platform for analyzing GitHub event data. By loading the data into Snowflake, creating tables, cleaning the data, analyzing and visualizing it, and drawing conclusions, you can gain deep insights from GitHub events and use those insights to optimize your business decisions.

· 12 min read

As a developer for a popular e-commerce website, you know that integrating with external APIs is a common requirement for modern applications. However, if your website's database is built using MySQL, you may face limitations when it comes to making HTTP requests. To overcome this challenge, you can build a custom MySQL pipeline that can send HTTP requests to the API. In this article, we will explore how to build such a pipeline using Vanus, a lightweight tool designed to stream data from MySQL databases to HTTP endpoints. With Vanus, you can easily integrate your MySQL database with external APIs, allowing your application to benefit from the latest data and functionality available in third-party services.

Table of Contents

Event Streaming

Event streaming is a technology that has gained significant popularity in modern applications. It involves the continuous and real-time processing of events or data generated by various sources. These events could include user actions, system events, sensor data, and more. By processing these events in real-time, applications can quickly respond to changes and make decisions based on the most up-to-date information.

Event streaming is particularly important in modern applications where data volumes are high and the need for real-time processing is critical. Traditional batch processing methods, where data is collected and processed in batches, can result in latency and delay in processing important events. Event streaming allows for a more responsive and real-time approach to data processing, which is essential in today's fast-paced digital landscape.

Vanus is an open-source tool designed to facilitate event streaming from various sources. It allows users to collect, filter, and route events to different destinations in real-time. Vanus enables users to build flexible and robust event streaming pipelines that can be easily integrated into modern applications.

MySQL

Setting up a MySQL database

Setting up a MySQL database is the first step towards building a custom MySQL pipeline. Here's a detailed explanation of how to set up a MySQL database:

  1. Download and Install MySQL: The first step is to download and install MySQL on your computer. You can download MySQL Community Edition for free from the MySQL website. Make sure to choose the correct version for your operating system.
  2. Configure MySQL: After installing MySQL, you need to configure it. During the installation process, you will be prompted to set a root password for the MySQL server. Make sure to remember this password, as you will need it later.
  3. Start MySQL Server: Once you have installed and configured MySQL, you need to start the MySQL server. To do this, open a terminal or command prompt and run the following command:
sudo systemctl start mysqld

This command starts the MySQL server and enables it to run in the background.

  4. Log in to MySQL: To interact with the MySQL server, you need to log in to it using the root password you set during the configuration process. To do this, run the following command:
mysql -u root -p

This command logs you in to the MySQL server as the root user.

  5. Create a Database: Once you are logged in to the MySQL server, you can create a new database using the following command:
CREATE DATABASE <database_name>;

Replace <database_name> with the name you want to give your database.

  6. Create a Table: After creating a database, you need to create a table in the database. Tables are used to store data in a MySQL database. You can create a table using the following command:
CREATE TABLE <table_name> (
  <column_name> <data_type> <constraint>,
  <column_name> <data_type> <constraint>,
  ...
);

Replace <table_name> with the name you want to give your table. <column_name> represents the name of the column you want to create, and <data_type> represents the data type of the column. <constraint> is an optional parameter that sets constraints on the column.

  7. Insert Data: After creating a table, you can insert data into it using the following command:
INSERT INTO <table_name> (<column_name>, <column_name>, ...) VALUES (<value>, <value>, ...);

Replace <table_name> with the name of your table, <column_name> with the name of the column you want to insert data into, and <value> with the value you want to insert.

With these steps, you have set up a MySQL database and created a table with data. Now you can move on to building your custom MySQL pipeline that can send HTTP requests to an external API.
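As a concrete illustration of the commands above, here is a minimal sketch that creates a hypothetical vanus_test database with a users table and a single row (the names are examples only, not something the connector requires):

# Create an example database and table, then insert one row.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS vanus_test;
USE vanus_test;
CREATE TABLE IF NOT EXISTS users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  user VARCHAR(64) NOT NULL,
  password VARCHAR(64) NOT NULL,
  email VARCHAR(128)
);
INSERT INTO users (user, password, email)
VALUES ('alice', 'secret', 'alice@example.com');
SQL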

MySQL to HTTP scenarios

Here are 10 real-life scenarios where you might need to set up a MySQL to HTTP pipeline:

  • E-commerce website: As mentioned earlier, if you are building an e-commerce website with MySQL as the database, you may need to integrate with an external API that provides shipping or payment services. A MySQL to HTTP pipeline can be used to send data from the database to the API.
  • Healthcare applications: Healthcare applications often need to integrate with external systems that provide electronic health records or patient information. A MySQL to HTTP pipeline can be used to securely transmit data to these systems.
  • Financial applications: Financial applications may need to integrate with external systems that provide stock market data or banking services. A MySQL to HTTP pipeline can be used to send data to these systems.
  • Social media platforms: Social media platforms may need to integrate with external systems that provide analytics or advertisement services. A MySQL to HTTP pipeline can be used to send data from the database to these systems.
  • Customer relationship management (CRM) systems: CRM systems may need to integrate with external systems that provide customer support or sales services. A MySQL to HTTP pipeline can be used to send data from the database to these systems.
  • Manufacturing and logistics: Manufacturing and logistics applications often need to integrate with external systems that provide supply chain management or inventory control services. A MySQL to HTTP pipeline can be used to send data to these systems.
  • IoT devices: IoT devices often generate large amounts of data that needs to be stored and analyzed. A MySQL to HTTP pipeline can be used to send this data to external analytics or visualization tools.
  • Gaming platforms: Gaming platforms may need to integrate with external systems that provide player statistics or leaderboard services. A MySQL to HTTP pipeline can be used to send data from the database to these systems.
  • Government services: Government services often need to integrate with external systems that provide data on weather, traffic, or crime statistics. A MySQL to HTTP pipeline can be used to send data from the database to these systems.
  • Educational platforms: Educational platforms may need to integrate with external systems that provide content or assessment services. A MySQL to HTTP pipeline can be used to send data from the database to these systems.

Pre-requisite

  • A MySQL Server
  • A Kubernetes cluster (We will use the playground)
  • A webhook server (for testing, a free webhook endpoint service works)

How to send customized events from MySQL to HTTP

Here are the steps you can follow to send customized HTTP requests from any MySQL event.

Step 1: Deploy Vanus on the Playground

  • Wait until the K8s environment is ready (usually less than 1 min).

  • Install Vanus by typing the following command:

    kubectl apply -f https://dl.vanus.ai/all-in-one/v0.6.0.yml

  • Verify if Vanus is deployed successfully:

$ watch -n2 kubectl get po -n vanus
vanus-controller-0 1/1 Running 0 96s
vanus-controller-1 1/1 Running 0 72s
vanus-controller-2 1/1 Running 0 69s
vanus-gateway-8677fc868f-rmjt9 1/1 Running 0 97s
vanus-store-0 1/1 Running 0 96s
vanus-store-1 1/1 Running 0 68s
vanus-store-2 1/1 Running 0 68s
vanus-timer-5cd59c5bf-hmprp 1/1 Running 0 97s
vanus-timer-5cd59c5bf-pqkd5 1/1 Running 0 97s
vanus-trigger-7685d6cc69-8jgsl 1/1 Running 0 97s
  • Install vsctl (the command line tool).

    curl -O https://dl.vanus.ai/vsctl/latest/linux-amd64/vsctl
    chmod ug+x vsctl
    mv vsctl /usr/local/bin
  • Set the endpoint for vsctl.

    export VANUS_GATEWAY=192.168.49.2:30001
  • Create an Eventbus to store your events.

    $ vsctl eventbus create --name mysql-http-scenario
    +----------------+--------------------+
    | RESULT | EVENTBUS |
    +----------------+--------------------+
    | Create Success | mysql-http-scenario|
    +----------------+--------------------+

Step 2: Deploy the MySQL Source Connector

  • Enable binary logging if you have disabled it (it is enabled by default in MySQL). Create a new user, grant the required privileges, and choose a unique password for the user.

To enable binary logging in MySQL, you need to perform the following steps:

  1. Open the MySQL configuration file, which is typically located at /etc/mysql/my.cnf on Linux or C:\ProgramData\MySQL\MySQL Server 8.0\my.ini on Windows.
  2. Look for the [mysqld] section of the configuration file, which contains various settings for the MySQL server.
  3. Add the following line to the [mysqld] section to enable binary logging:
log-bin=mysql-bin

This tells MySQL to create binary log files using the mysql-bin base name. You can change the name if you prefer.

  4. Save the configuration file and restart the MySQL server for the changes to take effect:
sudo service mysql restart

or

sudo systemctl restart mysql
  5. Verify that binary logging is enabled by logging in to the MySQL server and running the following command:
SHOW MASTER STATUS;

This will display information about the binary log files that are currently being used by the MySQL server. If binary logging is not enabled, this command will return an error.

CREATE USER 'vanus'@'%' IDENTIFIED WITH mysql_native_password BY 'PASSWORD';
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'vanus'@'%';
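Optionally, you can confirm the new account and its privileges before wiring up the connector; a quick check, assuming the MySQL server is reachable from your shell:

# List the privileges granted to the replication user.
mysql -u root -p -e "SHOW GRANTS FOR 'vanus'@'%';"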
  • Create the config file for MySQL in the Playground. Change MYSQL_HOST, MYSQL_PORT, PASSWORD, DATABASE_NAME, and TABLE_NAME.
cat << EOF > config.yml
target: http://192.168.49.2:30002/gateway/mysql-http-scenario # Vanus in Playground
name: "quick_start"
db:
  host: "MYSQL_HOST"     # IP address of MySQL server
  port: MYSQL_PORT       # port of MySQL server
  username: "vanus"      # Username
  password: "PASSWORD"   # Password previously set
database_include: [ "<DATABASE_NAME>" ]  # The name of your database

# format is vanus_test.tableName
table_include: [ "TABLE_NAME" ]  # The name of your Table

store:
  type: FILE
  pathname: "/vanus-connect/data/offset.dat"

db_history_file: "/vanus-connect/data/history.dat"
EOF
  • Run the MySQL Source in the background
docker run -it --rm --network=host \
  -v ${PWD}:/vanus-connect/config \
  -v ${PWD}:/vanus-connect/data \
  --name source-mysql public.ecr.aws/vanus/connector/source-mysql &

Step 3: Deploy the HTTP Sink Connector

To run the HTTP sink in Kubernetes, follow the steps below:

  • Create a Kubernetes deployment file (e.g., sink-http.yml) that includes the following configurations:
cat << EOF > sink-http.yml
apiVersion: v1
kind: Service
metadata:
  name: sink-http
  namespace: vanus
spec:
  selector:
    app: sink-http
  type: ClusterIP
  ports:
    - port: 8080
      name: sink-http
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sink-http
  namespace: vanus
data:
  config.yml: |-
    port: 8080
    target: http://address.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sink-http
  namespace: vanus
  labels:
    app: sink-http
spec:
  selector:
    matchLabels:
      app: sink-http
  replicas: 1
  template:
    metadata:
      labels:
        app: sink-http
    spec:
      containers:
        - name: sink-http
          image: public.ecr.aws/vanus/connector/sink-http:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          ports:
            - name: http
              containerPort: 8080
          volumeMounts:
            - name: config
              mountPath: /vanus-connect/config
      volumes:
        - name: config
          configMap:
            name: sink-http
EOF

  • Edit the configuration in it.
vi sink-http.yml

NOTE: Remember to replace the values of the target URL and port with your own HTTP endpoint.
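The manifest still needs to be applied to the cluster before the sink can receive events; assuming the file created above:

# Deploy the HTTP Sink into the vanus namespace.
kubectl apply -f sink-http.yml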

Check out the results

  • Finally, you can create a subscription that will define how the events should be transformed before being sent to the sink connector.
  • You can use the following command to create a subscription:
vsctl subscription create \
  --name mysql-http-subscription \
  --eventbus mysql-http-scenario \
  --sink 'http://sink-http:8080' \
  --transformer '{
    "define": {
      "user": "$.data.user",
      "password": "$.data.password",
      "email": "$.data.email"
    },
    "template": {
      "User": "<user>",
      "Password": "<password>",
      "Email": "<email>"
    }
  }'

Here, we are creating a subscription named "mysql-http-subscription" that will consume events from the "mysql-http-scenario" Eventbus and send them to the "sink-http" sink connector. We are defining three variables using the "define" parameter: "user", "password", and "email", which will store the corresponding values from the incoming events. Finally, we are using the "template" parameter to create a JSON template that will replace the variables with their corresponding values in the transformed events. Once you have created the subscription, it will start consuming events from the Eventbus, transform them according to the specified rules, and send them to the HTTP endpoint using the sink connector.
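To see the pipeline end to end, you can insert a row into the watched table and watch your HTTP endpoint receive the transformed payload. A sketch, assuming the hypothetical vanus_test.users table from the setup section:

# Insert a row; the MySQL Source picks the change up from the binlog and the
# HTTP Sink POSTs roughly {"User":"bob","Password":"secret","Email":"bob@example.com"}
# to the configured target URL.
mysql -u root -p -e \
  "INSERT INTO vanus_test.users (user, password, email) VALUES ('bob', 'secret', 'bob@example.com');"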

Conclusion

In conclusion, connecting MySQL to HTTP endpoints can be a powerful tool for data integration and automation. By using Vanus, we can easily set up subscriptions to capture changes in MySQL databases and send them to HTTP endpoints in real-time, without the need for complex coding or configuration. This can enable a wide range of use cases, from syncing data between systems to triggering workflows based on database events. With the step-by-step guide and examples provided in this article, you should now have a good understanding of how to use Vanus to create MySQL-to-HTTP subscriptions and customize them using the transformer feature. We hope this article has been helpful in getting you started with this powerful tool and exploring the possibilities it offers for your data integration needs.

· 11 min read

Welcome to my blog on how to get notifications from MySQL to email. For businesses and organizations that rely on MySQL to manage their data, staying informed about changes to the database is essential. However, manually monitoring the database for updates can be time-consuming and prone to human error.

Thankfully, Vanus provides a solution for this problem by allowing users to set up an event pipeline that automatically sends email notifications whenever a particular event occurs in the database. In this blog, I will provide a step-by-step guide on how to set up this feature and customize it to fit your specific needs. Whether you are a MySQL user who wants to streamline their database management, or a database administrator who needs to stay informed about updates, this blog will provide you with the knowledge and tools you need to set up email notifications for your MySQL database. So let's dive in and learn how to get notifications from MySQL to email!

· 6 min read

Table of Contents

Introduction

When it comes to low-traffic websites, storing logs on the web server may not cause any issues. However, for high-traffic websites such as e-commerce sites that receive millions of requests per day, storing such a massive amount of logs can pose some challenges. Firstly, it can require more resources to handle the logs, which can increase the cost of maintaining the website. Additionally, if there is a problem with the server, the log files may not be accessible, which can make troubleshooting difficult.

  • What is Amazon S3?

    Amazon Simple Storage Service (Amazon S3) is an object storage service that provides industry-leading performance, security, and scalability. Customers of all sizes and sectors can use Amazon S3 to store and protect any amount of data for a variety of use cases, including data lakes, websites, mobile applications, backup and restore, archives, business applications, IoT devices, and big data analytics. To meet your unique business, organizational, and compliance needs, Amazon S3 offers management options that let you optimize, organize, and configure access to your data.

  • What is an HTTP Request?

    A client sends an HTTP request to a named host on a server. Accessing a server resource is the purpose of the request.

    The client uses parts of a URL (Uniform Resource Locator), which contains the information required to access the resource, to submit the request.

    A properly constructed HTTP request contains the following components: a request line, a set of header fields (HTTP headers), and, if required, a message body.

    In this tutorial, I will show you how you can use Vanus Connect to build a highly available and persistent log stream from HTTP requests made to your web server and store them in an Amazon S3 bucket.

Pre-requisite

  • Have a container runtime (e.g., Docker).
  • An Amazon S3 bucket.
  • AWS IAM Access Key.
  • AWS permissions for the IAM user (a sketch for granting this follows the list):
    • s3:PutObject
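If your IAM user does not yet have that permission, one way to grant it with the AWS CLI is sketched below; the user name, policy name, and bucket name are placeholders:

# Attach an inline policy that allows uploads to the target bucket.
aws iam put-user-policy \
  --user-name your_iam_user \
  --policy-name vanus-s3-sink-put \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your_bucket_name/*"
    }]
  }'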

Now, I will show you a step-by-step guide on how to build your own persistent log stream.

How to Log HTTP Requests to S3 Bucket

For this tutorial, we will be using the Vanus Playground, an online Kubernetes environment.

Step 1: Deploy Vanus on the Playground

  • Wait for the K8s environment to be prepared (usually less than 1 min) until the terminal is ready.

  • Install Vanus by typing the following command:
kubectl apply -f https://dl.vanus.ai/all-in-one/v0.6.0.yml
  • Verify if Vanus is deployed successfully:
 $ watch -n2 kubectl get po -n vanus
vanus-controller-0 1/1 Running 0 96s
vanus-controller-1 1/1 Running 0 72s
vanus-controller-2 1/1 Running 0 69s
vanus-gateway-8677fc868f-rmjt9 1/1 Running 0 97s
vanus-store-0 1/1 Running 0 96s
vanus-store-1 1/1 Running 0 68s
vanus-store-2 1/1 Running 0 68s
vanus-timer-5cd59c5bf-hmprp 1/1 Running 0 97s
vanus-timer-5cd59c5bf-pqkd5 1/1 Running 0 97s
vanus-trigger-7685d6cc69-8jgsl 1/1 Running 0 97s
  • Install vsctl (the command line tool).
curl -O https://dl.vanus.ai/vsctl/latest/linux-amd64/vsctl
chmod ug+x vsctl
mv vsctl /usr/local/bin
  • Set the endpoint for vsctl.
export VANUS_GATEWAY=192.168.49.2:30001
  • Create an Eventbus to store your events.
vsctl eventbus create --name http-s3
+----------------+------------------+
| RESULT | EVENTBUS |
+----------------+------------------+
| Create Success | http-s3 |
+----------------+------------------+

Step 2: Make Directories for the HTTP Source and S3 Sink Connectors and Create the Config Files

  • Make the HTTP Source directory and move into it
mkdir http-source && cd http-source
  • Create the config file
cat << EOF > config.yml
target: http://192.168.49.2:30002/gateway/http-s3
port: 31081
EOF
  • Use docker run to run the HTTP Source config.yml file
docker run -it --rm --network=host \
-v ${PWD}:/vanus-connect/config \
--name source-http public.ecr.aws/vanus/connector/source-http &

Note: I ran this in the background of my terminal; if you wish to see the output, remove the ampersand (&) at the end.

  • Make the S3 Sink directory and move into it
cd .. && mkdir s3-sink && cd s3-sink
  • Create the config file
cat << EOF > config.yml
port: 8080
aws:
  access_key_id: your_access_key
  secret_access_key: your_secret_key
region: "your_region"
bucket: "your_bucket_name"
scheduled_interval: 10
EOF
  • Use docker run to run the S3 Sink with the config.yml file
docker run -it --rm \
-p 8082:8080 \
-v ${PWD}:/vanus-connect/config \
--name sink-aws-s3 public.ecr.aws/vanus/connector/sink-aws-s3 &

Note: I ran this in the background of my terminal; if you wish to see the output, remove the ampersand (&) at the end.

Step 3: Create Subscription and Make a request using CURL

  • The Subscription is a relationship established between a Sink and an Eventbus. The Subscription reflects the Sink's interest in receiving events and describes how to deliver those events. To create a subscription, use
vsctl subscription create --name http \
--eventbus http-s3 \
--sink 'http://ip10-1-39-4-cecpi79ajm80o97dfdug-8082.direct.play.linkall.com'

The sink URL (http://ip10-1-39-4-cecpi79ajm80o97dfdug-8082.direct.play.linkall.com) is different for different users. To obtain your unique URL, you have to follow these steps:

  1. Go to Vanus Playground, and click “Continue with GitHub”


  2. Click on the GitHub-Twitter Scenario Tab


  3. Scroll down and look for Payload URL


  4. Copy the Payload URL and paste it into the Sink URL

  5. Make a request using CURL

curl --location --request POST 'localhost:31081' \
--header 'Content-Type: application/cloudevents+json' \
--data-raw '{
"id": "53d1c340-551a-11ed-96c7-8b504d95037c",
"source": "quickstart",
"specversion": "1.0",
"type": "quickstart",
"datacontenttype": "application/json",
"time": "2022-10-26T10:38:29.345Z",
"data": {
"myData": "Hello S3 Bucket!"
}
}'

Check out the result

Check your S3 bucket; you will see a folder containing the files that have been uploaded.


We can now see that our S3 bucket was able to pull the request when we used CURL. The S3 sink supports partitioning; files can be pulled on an hourly or daily basis.

We can inspect a file and see the data received by our S3 bucket.
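If you prefer the command line to the console, you can list and inspect the uploaded objects with the AWS CLI; the bucket name is the one from your sink config:

# List everything the sink has written, then print one object to stdout
# (replace path/to/object with a key from the listing).
aws s3 ls s3://your_bucket_name --recursive
aws s3 cp s3://your_bucket_name/path/to/object -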

Conclusion

In this tutorial, I have shown how you can use Vanus Connect to build a highly available and persistent log stream from HTTP requests made to your website and store them in an Amazon S3 bucket.

· 5 min read

Often, the sales or marketing team needs database information about buyers, members, or users and asks IT to provide it.

Today I will show you how we can automatically and safely take the data entries in real-time from a MySQL database, transform the messages in a way that makes sense to the team, and send them directly to a Slack channel without needing physical input each time.
