
Commit 1ff5c83

Minor changes
1 parent 5604bd5 commit 1ff5c83


2 files changed: +7 -7 lines changed


README.md

Lines changed: 5 additions & 5 deletions
@@ -75,7 +75,7 @@ Example:
 select zson_learn('{{"table1", "col1"}, {"table2", "col2"}}');
 ```

-You can create a temporary table and write some common JSONB documents to it manually or use existing tables. The idea is to provide a subset of real data. Lets say some document *type* is twice as frequent as some other document type. ZSON expects that there will be twice more documents of the first type than of the second in a learning set.
+You can create a temporary table and write some common JSONB documents to it manually or use existing tables. The idea is to provide a subset of real data. Lets say some document *type* is twice as frequent as some other document type. ZSON expects that there will be twice as many documents of the first type as those of the second one in a learning set.

 Resulting dictionary could be examined using this query:

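A minimal sketch of the learning-set idea in the changed paragraph above, assuming a hypothetical temporary table `zson_learn_docs` with a single JSONB column `doc`; only the `zson_learn` call form is taken from the README itself:

```
-- Hypothetical learning set: documents of the first type appear twice as
-- often as documents of the second type, mirroring the real data.
create temporary table zson_learn_docs (doc jsonb);

insert into zson_learn_docs (doc) values
  ('{"type": "first",  "aaa": 123}'),
  ('{"type": "first",  "aaa": 456}'),
  ('{"type": "second", "bbb": 789}');

-- Same call form as above, pointed at the temporary table.
select zson_learn('{{"zson_learn_docs", "doc"}}');
```
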
@@ -97,17 +97,17 @@ zson_test=# select x -> 'aaa' from zson_example;
 ?column? | 123
 ```

-## Migrating to new dictionary
+## Migrating to a new dictionary

 When schema of JSONB documents evolve ZSON could be *re-learned*:

 ```
 select zson_learn('{{"table1", "col1"}, {"table2", "col2"}}');
 ```

-This time *second* dictionary will be created. Dictionaries are cached in memory so it will take about a minute before ZSON realizes that there is a new dictionary. After that old documents will be decompressed using old dictionary and new documents will be compressed and decompressed using new dictionary.
+This time *second* dictionary will be created. Dictionaries are cached in memory so it will take about a minute before ZSON realizes that there is a new dictionary. After that old documents will be decompressed using the old dictionary and new documents will be compressed and decompressed using the new dictionary.

-To find out which dictionary is used for given ZSON document use zson_info procedure:
+To find out which dictionary is used for a given ZSON document use zson_info procedure:

 ```
 zson_test=# select zson_info(x) from test_compress where id = 1;
@@ -119,7 +119,7 @@ zson_test=# select zson_info(x) from test_compress where id = 2;
 zson_info | zson version = 0, dict version = 0, ...
 ```

-If **all** ZSON documents are migrated to new dictionary the old one could be safely removed:
+If **all** ZSON documents are migrated to the new dictionary the old one could be safely removed:

 ```
 delete from zson_dict where dict_id = 0;

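A hedged sketch of how one could check that no documents still use the old dictionary before running the `delete` above. The `test_compress` table and column `x` come from the earlier examples; matching on the text returned by `zson_info` is an assumption based on the sample output shown in this diff:

```
-- Assumption: zson_info() output contains 'dict version = N',
-- as in the sample output above.
select count(*)
from test_compress
where zson_info(x) like '%dict version = 0%';

-- Only if the count is 0, remove the old dictionary:
delete from zson_dict where dict_id = 0;
```
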
docs/benchmark.md

Lines changed: 2 additions & 2 deletions
@@ -319,9 +319,9 @@ tps = 1086.396431 (excluding connections establishing)

 In this case ZSON gives about 11.8% more TPS.

-We can modify compress.pgbench and nocompress.pgbench so only documents with id in between of 1 and 3000 will be requested. It will simulate a case when all data *does* fit into memory. In this case we see 141K TPS (JSONB) vs 134K TPS (ZSON) which is 5% slower.
+We can modify compress.pgbench and nocompress.pgbench so only the documents with id between 1 and 3000 will be requested. It will simulate a case when all data *does* fit into memory. In this case we see 141K TPS (JSONB) vs 134K TPS (ZSON) which is 5% slower.

-Compression ratio could be different depending on documents, database schema, number of rows, etc. But in general ZSON compression is much better than build-in PostgreSQL compression (PGLZ):
+The compression ratio could be different depending on the documents, the database schema, the number of rows, etc. But in general ZSON compression is much better than build-in PostgreSQL compression (PGLZ):

 ```
 before | after | ratio

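For illustration, the compress.pgbench modification mentioned in the changed lines above might look roughly like the sketch below; the query shape and the `test_compress` table are taken from the earlier examples, and restricting the id range to 1..3000 is the change the text describes. This is an assumption, not the actual benchmark script:

```
\set id random(1, 3000)
select x -> 'aaa' from test_compress where id = :id;
```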