Quickstart
If you want to test the client you will need an EventStoreDB instance to interact with.
You can set one up however you prefer, but if you're short on time and you have docker compose installed on your machine, here's your shortcut:
```shell
curl -o docker-compose-eventstore-3nodes.yaml https://raw.githubusercontent.com/stefanondisponibile/eventstore_grpc/master/docker-compose.yaml && \
docker compose -f docker-compose-eventstore-3nodes.yaml up -d && \
docker compose -f docker-compose-eventstore-3nodes.yaml logs
```
This downloads a docker compose file that sets up an EventStoreDB cluster with 3 nodes.
If you cloned the eventstore_grpc repository you can skip the curl part and use the docker-compose.yaml file directly.
Note
The docker compose file generates SSL certificates and puts them in a certs folder in your working directory. You can delete that folder, but since it was generated by a docker container you will need root privileges.
For example, if you want to clean everything up when you're done, you can run the following:
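A sketch of such a cleanup, assuming the compose file name used above (adjust the paths if yours differ):

```shell
# Stop and remove the cluster's containers, then delete the generated
# certificates. The certs folder was created by a container running as
# root, hence the sudo.
docker compose -f docker-compose-eventstore-3nodes.yaml down
sudo rm -rf certs
```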
If you want, you can explore the EventStoreDB Admin UI by pointing your browser to https://localhost:2111.
Creating a client
Our EventStoreDB instance is running 3 nodes at the following hosts:
- localhost:2111
- localhost:2112
- localhost:2113
We want to use TLS for secure connections, so we're going to read the CA certificate from the folder created by docker compose (./certs/ca/ca.crt) and use the default admin credentials (admin, changeit).
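The exact client class and constructor may differ between versions of eventstore_grpc, so this sketch only assembles the connection details named above and leaves the hypothetical client construction as a comment (check the API reference for the real signature):

```python
# Connection details for the 3-node cluster started with docker compose.
hosts = ["localhost:2111", "localhost:2112", "localhost:2113"]
ca_certificate = "./certs/ca/ca.crt"  # CA certificate generated by docker compose
username, password = "admin", "changeit"  # default admin credentials

# Hypothetical client construction -- the class name and keyword arguments
# below are assumptions, not the library's confirmed API:
#
#   from eventstore_grpc import EventStore
#   client = EventStore(
#       hosts=hosts,
#       tls=True,
#       tls_ca_file=ca_certificate,
#       username=username,
#       password=password,
#   )
```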
Getting information about the cluster
Now that we have a client, we can ask the cluster for some information about itself:
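For example, via the gossip.get_cluster_info endpoint mentioned later in this guide. How it hangs off the client object is an assumption here (check the API reference); client is the client created in the previous section:

```python
# Ask the cluster for its current topology and print the raw
# ClusterInfo protobuf message (the accessor shape is an assumption).
cluster_info = client.gossip.get_cluster_info()
print(cluster_info)
```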
This should print something like the following:
```
members {
  instance_id {
    structured {
      most_significant_bits: 766833313904807028
      least_significant_bits: -7167183254000776328
    }
  }
  time_stamp: 16882865214981958
  state: Follower
  is_alive: true
  http_end_point {
    address: "127.0.0.1"
    port: 2113
  }
}
members {
  instance_id {
    structured {
      most_significant_bits: -1389199217038570244
      least_significant_bits: -6286225797498614732
    }
  }
  time_stamp: 16882865215001644
  state: Leader
  is_alive: true
  http_end_point {
    address: "127.0.0.1"
    port: 2112
  }
}
members {
  instance_id {
    structured {
      most_significant_bits: 4006561881348851895
      least_significant_bits: -7528605218151893547
    }
  }
  time_stamp: 16882865213735606
  state: Follower
  is_alive: true
  http_end_point {
    address: "127.0.0.1"
    port: 2111
  }
}
```
The gossip.get_cluster_info endpoint returns a ClusterInfo protobuf message, so you can use it directly. For example, you can check if all your nodes are alive:
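Since ClusterInfo is an ordinary protobuf message, plain Python works on it. The sketch below mimics the members shape with stand-in objects so that it runs without a live cluster; with a real client you would use the result of gossip.get_cluster_info instead:

```python
from types import SimpleNamespace

# Stand-ins mimicking ClusterInfo.members from the output above.
cluster_info = SimpleNamespace(
    members=[
        SimpleNamespace(state="Follower", is_alive=True),
        SimpleNamespace(state="Leader", is_alive=True),
        SimpleNamespace(state="Follower", is_alive=True),
    ]
)

# True only if every node in the cluster reports itself as alive.
all_alive = all(member.is_alive for member in cluster_info.members)
print(all_alive)  # → True
```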
Or you can check each node's state (e.g. to understand if one's a Leader or Follower):
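Again a stand-in sketch so it runs without a live cluster. With the real message, member.state is a numeric enum value, which is why the note below imports MemberInfo to resolve its name:

```python
from types import SimpleNamespace

# Stand-ins for ClusterInfo.members; with the real protobuf message you
# would resolve the enum name with something like
#   MemberInfo.VNodeState.Name(member.state)
# (the exact enum name is worth double-checking in gossip_pb2).
members = [
    SimpleNamespace(http_end_point=SimpleNamespace(port=2113), state="Follower"),
    SimpleNamespace(http_end_point=SimpleNamespace(port=2112), state="Leader"),
    SimpleNamespace(http_end_point=SimpleNamespace(port=2111), state="Follower"),
]

for member in members:
    print(member.http_end_point.port, member.state)

# Find the node currently acting as Leader.
leader = next(m for m in members if m.state == "Leader")
print(leader.http_end_point.port)  # → 2112
```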
Note
Here we're importing MemberInfo from eventstore_grpc.proto.gossip_pb2 to resolve the name of each member's state enum value.
Here you can find the complete reference to EventStoreDB's protobuf definitions.
You can import any of the compiled protos from eventstore_grpc.proto.
Creating events
To produce and consume events we use EventData objects. Each event has a unique id, a type, and some data representing the event itself. Additionally, you can store some metadata alongside it if you want to add "context" to the event you're storing, with information that's not part of the event itself, such as correlation ids, timestamps, or access information.
Note
EventData is roughly equivalent to the EventData type in other EventStoreDB gRPC clients.
You can create your own custom types of EventData, but it's common to use the JSON format for event payloads, which also makes it convenient to use some of EventStoreDB's built-in features, such as projections. This library provides a JSONEventData class that you can use to create events with a JSON payload:
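A hedged sketch of what creating such events might look like; the keyword names (type, data, event_id) are taken from the surrounding text, but the exact import path and signature are assumptions, so check the API reference:

```python
import uuid

from eventstore_grpc import JSONEventData  # import path is an assumption

# No event_id here: one will be generated automatically (see the note below).
event_1 = JSONEventData(type="some_event_occurred", data={"foo": "bar"})

# You can also pass an explicit event_id; it must be a valid UUID.
event_2 = JSONEventData(
    type="some_other_event_occurred",
    data={"baz": 42},
    event_id=uuid.uuid4(),
)
```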
Note
In event_1 we didn't provide any event_id. The JSONEventData object will generate a uuid.uuid4 id automatically in such cases. Bear in mind that event_id must be a valid UUID.
Publishing events
In EventStoreDB, publishing an event means appending it to a stream. You can think of a stream as an ordered collection of events.
All you have to do to append events to a stream is create the events and decide on a name for your stream.
We can append a list of events:
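A sketch, assuming an append method in the spirit of other EventStoreDB clients (the method name and keyword are assumptions; client, event_1 and event_2 come from the previous steps):

```python
stream = "some-stream"  # stream names are plain strings

# Method name and keyword are assumptions -- check the API reference.
client.append_to_stream(stream, events=[event_1, event_2])
```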
Or one at a time:
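Again hedged, with a third event matching the output shown in the reading section below (the metadata keyword name is an assumption):

```python
# An event with custom metadata and a null payload.
event_3 = JSONEventData(
    type="something-happened",
    data=None,
    metadata={"some": "custom-metadata"},  # keyword name is an assumption
)
client.append_to_stream("some-stream", events=event_3)
```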
Reading events
We have different options for reading events from a stream, but for this quickstart let's keep it simple and say that we want to read all the events we just published (i.e. from the start of our some-stream stream).
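A sketch, assuming a read method in the spirit of other EventStoreDB clients (the actual name and signature may differ; see the API reference). It should print something like the output below:

```python
# Read every event in "some-stream", from the start (the method name is
# an assumption).
for event in client.read_stream("some-stream"):
    print(event)
```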
```
event {
  event {
    id {
      string: "8c253ce9-02ec-42d8-b7df-7607c2dc91d3"
    }
    stream_identifier {
      stream_name: "some-stream"
    }
    prepare_position: 18446744073709551615
    commit_position: 18446744073709551615
    metadata {
      key: "type"
      value: "some_event_occurred"
    }
    metadata {
      key: "created"
      value: "16882945222373749"
    }
    metadata {
      key: "content-type"
      value: "application/json"
    }
    custom_metadata: "{}"
    data: "{\"foo\": \"bar\"}"
  }
  no_position {
  }
}
event {
  event {
    id {
      string: "40db443a-6244-472b-87c1-e8e87c8a3abf"
    }
    stream_identifier {
      stream_name: "some-stream"
    }
    stream_revision: 1
    prepare_position: 18446744073709551615
    commit_position: 18446744073709551615
    metadata {
      key: "type"
      value: "some_other_event_occurred"
    }
    metadata {
      key: "created"
      value: "16882945222374129"
    }
    metadata {
      key: "content-type"
      value: "application/json"
    }
    custom_metadata: "{}"
    data: "{\"baz\": 42}"
  }
  no_position {
  }
}
event {
  event {
    id {
      string: "db4a5a73-f4ce-4760-974d-97ad5091789c"
    }
    stream_identifier {
      stream_name: "some-stream"
    }
    stream_revision: 2
    prepare_position: 18446744073709551615
    commit_position: 18446744073709551615
    metadata {
      key: "type"
      value: "something-happened"
    }
    metadata {
      key: "created"
      value: "16882945889580381"
    }
    metadata {
      key: "content-type"
      value: "application/json"
    }
    custom_metadata: "{\"some\": \"custom-metadata\"}"
    data: "null"
  }
  no_position {
  }
}
```
Going further
So far you've learned how to:
- connect to EventStoreDB
- get information about the nodes of the cluster
- create events
- append events to a stream
- read events from a stream
There's much more you can do with EventStoreDB, and we're constantly trying to improve our documentation; we will add more sections to cover intermediate and advanced use cases.
For now, please refer to the API reference.