Re: POC: Cache data in GetSnapshotData()

Lists: pgsql-hackers
From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-01-15 05:53:47
Message-ID: CAD__OugYQmNP5dFkGdSjP48B8WjD0+rtff3iupThnQxTXLEUog@mail.gmail.com

On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:

> I think at the very least the cache should be protected by a separate
> lock, and that lock should be acquired with TryLock. I.e. the cache is
> updated opportunistically. I'd go for an lwlock in the first iteration.

I tried to implement a simple patch which protects the cache. Of all the
backends which compute the snapshot (when the cache is invalid), only one
of them will write to the cache. This is done with one atomic
compare-and-swap operation.
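
A minimal sketch of the idea, with a hypothetical state flag in PROC_HDR
(the field and state names below are illustrative, not necessarily the
ones used in the attached patch):

    /* 0 = cache invalid, 1 = cache being written, 2 = cache valid */
    uint32      expected = 0;

    if (pg_atomic_compare_exchange_u32(&ProcGlobal->cached_snapshot_state,
                                       &expected, 1))
    {
        /*
         * We won the race: copy the freshly computed snapshot into the
         * shared cache, then mark it valid.
         */
        ...
        pg_atomic_write_u32(&ProcGlobal->cached_snapshot_state, 2);
    }
    /* Backends that lose the race keep their locally computed snapshot. */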

After the above fix, the memory corruption is no longer visible. But I see
some more failures at higher client counts (at 128 clients it is easily
reproducible).

Errors are:

DETAIL: Key (bid)=(24) already exists.
STATEMENT: UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2;
client 17 aborted in state 11: ERROR: duplicate key value violates unique constraint "pgbench_branches_pkey"
DETAIL: Key (bid)=(24) already exists.
client 26 aborted in state 11: ERROR: duplicate key value violates unique constraint "pgbench_branches_pkey"
DETAIL: Key (bid)=(87) already exists.
ERROR: duplicate key value violates unique constraint "pgbench_branches_pkey"
DETAIL: Key (bid)=(113) already exists.

After some analysis, I think the problem is in GetSnapshotData(), while
computing the snapshot:

    /*
     * We don't include our own XIDs (if any) in the snapshot, but we
     * must include them in xmin.
     */
    if (NormalTransactionIdPrecedes(xid, xmin))
        xmin = xid;
    if (pgxact == MyPgXact)     /* <--- the line in question */
        continue;

We do not add our own xid to the xip array, so I am wondering: if another
backend uses the same cached snapshot, will it be able to see the changes
made by me (the current backend)? I think that since we compute a snapshot
which will be used by other backends, we need to add our xid to the xip
array to indicate that the transaction is still open.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
cache_snapshot_data_avoid_cuncurrent_write_to_cache.patch text/x-patch 7.4 KB

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-01-15 09:27:53
Message-ID: CAA4eK1JOuZabO64+rK5Xuo4mP15D8WbCkDOpRMnqXOKe2WFJjw@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jan 15, 2016 at 11:23 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:

> On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> > I think at the very least the cache should be protected by a separate
> > lock, and that lock should be acquired with TryLock. I.e. the cache is
> > updated opportunistically. I'd go for an lwlock in the first iteration.
>
> I tried to implement a simple patch which protect the cache. Of all the
> backend which
> compute the snapshot(when cache is invalid) only one of them will write to
> cache.
> This is done with one atomic compare and swap operation.
>
> After above fix memory corruption is not visible. But I see some more
> failures at higher client sessions(128 it is easily reproducible).
>
>
Don't you think we need to update the snapshot fields like count,
subcount before saving it to shared memory?
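
For illustration, roughly this would have to happen before the cached copy
becomes visible to other backends (csnap standing for the shared cached
snapshot; this is a sketch, not the patch's exact code):

    /* Make the cached copy self-consistent before anyone can read it. */
    csnap->xmin = xmin;
    csnap->xmax = xmax;
    csnap->xcnt = count;            /* xids gathered into csnap->xip */
    csnap->subxcnt = subcount;      /* subxids gathered into csnap->subxip */
    csnap->suboverflowed = suboverflowed;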

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-01-16 04:53:14
Message-ID: CAA4eK1L8unie41+bJb91E65xc7-rstGz3RSkJCuj8JK2nt8SGA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jan 15, 2016 at 11:23 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:

> On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> > I think at the very least the cache should be protected by a separate
> > lock, and that lock should be acquired with TryLock. I.e. the cache is
> > updated opportunistically. I'd go for an lwlock in the first iteration.
>
> I tried to implement a simple patch which protect the cache. Of all the
> backend which
> compute the snapshot(when cache is invalid) only one of them will write to
> cache.
> This is done with one atomic compare and swap operation.
>
> After above fix memory corruption is not visible. But I see some more
> failures at higher client sessions(128 it is easily reproducible).
>
> Errors are
> DETAIL: Key (bid)=(24) already exists.
> STATEMENT: UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid
> = $2;
> client 17 aborted in state 11: ERROR: duplicate key value violates unique
> constraint "pgbench_branches_pkey"
> DETAIL: Key (bid)=(24) already exists.
> client 26 aborted in state 11: ERROR: duplicate key value violates unique
> constraint "pgbench_branches_pkey"
> DETAIL: Key (bid)=(87) already exists.
> ERROR: duplicate key value violates unique constraint
> "pgbench_branches_pkey"
> DETAIL: Key (bid)=(113) already exists.
>
> After some analysis I think In GetSnapshotData() while computing snapshot.
> /*
> * We don't include our own XIDs (if any) in the snapshot,
> but we
> * must include them in xmin.
> */
> if (NormalTransactionIdPrecedes(xid, xmin))
> xmin = xid;
> *********** if (pgxact == MyPgXact) ******************
> continue;
>
> We do not add our own xid to xip array, I am wondering if other backend
> try to use
> the same snapshot will it be able to see changes made by me(current
> backend).
> I think since we compute a snapshot which will be used by other backends
> we need to
> add our xid to xip array to tell transaction is open.
>

I also think this observation of yours is right, and currently that is
okay because we always first check TransactionIdIsCurrentTransactionId().
Refer to the comments on top of XidInMVCCSnapshot() [1]. However, I am not
sure it is a good idea to start including the current backend's xid in the
snapshot, because that can lead to including its subxids as well, which
can make the snapshot bigger for cases where the current transaction has
many sub-transactions. One way could be that we store the current
backend's transaction and sub-transaction ids only in the saved snapshot;
does that sound reasonable to you?

Other than that, I think the patch needs to clear the saved snapshot for
Commit Prepared Transaction as well (refer to function
FinishPreparedTransaction()).

!   else if (!snapshot->takenDuringRecovery)
    {
        int        *pgprocnos = arrayP->pgprocnos;
        int         numProcs;
+       const uint32 snapshot_cached= 0;

I don't think the const is required for the above variable.

[1] -
Note: GetSnapshotData never stores either top xid or subxids of our own
backend into a snapshot, so these xids will not be reported as "running"
by this function. This is OK for current uses, because we always check
TransactionIdIsCurrentTransactionId first, except for known-committed
XIDs which could not be ours anyway.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-02-25 07:27:57
Message-ID: CAD__Oug=HXsGTfSUb1Gns0yMCzScaBmORmZuhi5iErAkXTSdYQ@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jan 16, 2016 at 10:23 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:

> On Fri, Jan 15, 2016 at 11:23 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
> wrote:
>
>> On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>>
>>> I think at the very least the cache should be protected by a separate
>>> lock, and that lock should be acquired with TryLock. I.e. the cache is
>>> updated opportunistically. I'd go for an lwlock in the first iteration.
>
> I also think this observation of yours is right and currently that is
> okay because we always first check TransactionIdIsCurrentTransactionId().
>
> + const uint32 snapshot_cached= 0;
>

I have fixed all of the issues reported by the regression tests. Also, now
when a backend tries to cache the snapshot, we also try to store its own
xid and subxids, so other backends can use them.

I also did some read-only perf tests.

Non-Default Settings.
================
scale_factor=300.

./postgres -c shared_buffers=16GB -N 200 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
checkpoint_completion_target=0.9

./pgbench -c $clients -j $clients -T 300 -M prepared -S postgres

Machine Detail:
cpu : POWER8
cores: 24 (192 with HT).

Clients  Base            With cached snapshot
1        19653.914409    19926.884664
16       190764.519336   190040.299297
32       339327.881272   354467.445076
48       462632.02493    464767.917813
64       522642.515148   533271.556703
80       515262.813189   513353.962521

But I did not see any perf improvement. I will continue testing the same.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
Cache_data_in_GetSnapshotData_POC.patch text/x-patch 7.7 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-02-26 09:25:41
Message-ID: CA+TgmoYOGBBE56ahsrA81Y-E1bzBMAzwJFCsuwncNSjF+W4cTw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 25, 2016 at 12:57 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> I have fixed all of the issues reported by regress test. Also now when
> backend try to cache the snapshot we also try to store the self-xid and sub
> xid, so other backends can use them.
>
> I also did some read-only perf tests.

I'm not sure what you are doing that keeps breaking threading for
Gmail, but this thread is getting split up for me into multiple
threads with only a few messages in each. The same seems to be
happening in the community archives. Please try to figure out what is
causing that and stop doing it.

I notice you seem not to have implemented this suggestion by Andres:

http://www.postgresql.org//message-id/[email protected]

Also, you should test this on a machine with more than 2 cores.
Andres's original test seems to have been on a 4-core system, where
this would be more likely to work out.

Also, Andres suggested testing this on an 80-20 write mix, whereas you
tested it on 100% read-only. In the read-only case there is no blocking on
ProcArrayLock, which reduces the chances that the patch will benefit.

I suspect, too, that the chances of this patch working out have
probably been reduced by 0e141c0fbb211bdd23783afa731e3eef95c9ad7a,
which seems to be targeting mostly the same problem. I mean it's
possible that this patch could win even when no ProcArrayLock-related
blocking is happening, but the original idea seems to have been that
it would help mostly with that case. You could try benchmarking it on
the commit just before that one and then on current sources and see if
you get the same results on both, or if there was more benefit before
that commit.

Also, you could consider repeating the LWLOCK_STATS testing that Amit
did in his original reply to Andres. That would tell you whether the
performance is not increasing because the patch doesn't reduce
ProcArrayLock contention, or whether the performance is not increasing
because the patch DOES reduce ProcArrayLock contention but then the
contention shifts to CLogControlLock or elsewhere.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-01 07:29:39
Message-ID: CAA4eK1+dQQB_Pn5RejTonePvJXYe0LEY4dRPhkkjhBon+5sbnA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Feb 26, 2016 at 2:55 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Feb 25, 2016 at 12:57 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
> wrote:
> > I have fixed all of the issues reported by regress test. Also now when
> > backend try to cache the snapshot we also try to store the self-xid and
> > sub xid, so other backends can use them.
> >

+ /* Add my own XID to snapshot. */
+ csnap->xip[count] = MyPgXact->xid;
+ csnap->xcnt = count + 1;

Don't we need to add this only when the xid of the current transaction is
valid? Also, I think it will be better if we can explain why we need to
add our own transaction id while caching the snapshot.
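
For illustration, a guarded version of the quoted lines might look like
this (a sketch only; the xid is invalid when the backend has not been
assigned one yet):

    /* Add my own XID to the cached snapshot, but only if I have one. */
    if (TransactionIdIsValid(MyPgXact->xid))
    {
        csnap->xip[count] = MyPgXact->xid;
        csnap->xcnt = count + 1;
    }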

> > I also did some read-only perf tests.
>
> I'm not sure what you are doing that keeps breaking threading for
> Gmail, but this thread is getting split up for me into multiple
> threads with only a few messages in each. The same seems to be
> happening in the community archives. Please try to figure out what is
> causing that and stop doing it.
>
> I notice you seem to not to have implemented this suggestion by Andres:
>
>
http://www.postgresql.org//message-id/[email protected]
>

As far as I can understand by reading the patch, it has kind of already
implemented the first suggestion by Andres, which is to use a try lock;
the patch uses atomic ops instead of an lwlock to implement it, but I
think it should have the same impact. Do you see any problem with that?

Now, talking about the second suggestion, which is to use some form of
'snapshot slots' to reduce the contention further: that could be
beneficial if we see any gain with the try lock. If you think it can be a
benefit over the current approach taken in the patch, then we can discuss
how to implement it.

> Also, you should test this on a machine with more than 2 cores.
> Andres's original test seems to have been on a 4-core system, where
> this would be more likely to work out.
>
> Also, Andres suggested testing this on an 80-20 write mix, where as
> you tested it on 100% read-only. In that case there is no blocking
> ProcArrayLock, which reduces the chances that the patch will benefit.
>

+1

>
> Also, you could consider repeating the LWLOCK_STATS testing that Amit
> did in his original reply to Andres. That would tell you whether the
> performance is not increasing because the patch doesn't reduce
> ProcArrayLock contention, or whether the performance is not increasing
> because the patch DOES reduce ProcArrayLock contention but then the
> contention shifts to CLogControlLock or elsewhere.
>

+1

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-03 11:20:17
Message-ID: CAD__OuiwEi5sHe2wwQCK36Ac9QMhvJuqG3CfPN+OFCMb7rdruQ@mail.gmail.com
Lists: pgsql-hackers

On Tue, Mar 1, 2016 at 12:59 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> Don't we need to add this only when the xid of current transaction is
> valid? Also, I think it will be better if we can explain why we need to
> add the our own transaction id while caching the snapshot.

I have fixed both of these and the patch is attached.

Some more tests done after that

*pgbench write tests: on 8 socket, 64 core machine.*

./postgres -c shared_buffers=16GB -N 200 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
checkpoint_completion_target=0.9

./pgbench -c $clients -j $clients -T 1800 -M prepared postgres

[image: Inline image 3]

A small improvement in performance at 64 threads.

*LWLock_Stats data:*

ProcArrayLock: Base.

=================

postgresql-2016-03-01_115252.log:PID 110019 lwlock main 4: shacq 1867601
exacq 35625 blk 134682 spindelay 128 dequeue self 28871

postgresql-2016-03-01_115253.log:PID 110115 lwlock main 4: shacq 2201613
exacq 43489 blk 155499 spindelay 127 dequeue self 32751

postgresql-2016-03-01_115253.log:PID 110122 lwlock main 4: shacq 2231327
exacq 44824 blk 159440 spindelay 128 dequeue self 33336

postgresql-2016-03-01_115254.log:PID 110126 lwlock main 4: shacq 2247983
exacq 44632 blk 158669 spindelay 131 dequeue self 33365

postgresql-2016-03-01_115254.log:PID 110059 lwlock main 4: shacq 2036809
exacq 38607 blk 143538 spindelay 117 dequeue self 31008

ProcArrayLock: With Patch.

=====================

postgresql-2016-03-01_124747.log:PID 1789 lwlock main 4: shacq 2273958
exacq 61605 blk 79581 spindelay 307 dequeue self 66088

postgresql-2016-03-01_124748.log:PID 1880 lwlock main 4: shacq 2456388
exacq 65996 blk 82300 spindelay 470 dequeue self 68770

postgresql-2016-03-01_124748.log:PID 1765 lwlock main 4: shacq 2244083
exacq 60835 blk 79042 spindelay 336 dequeue self 65212

postgresql-2016-03-01_124749.log:PID 1882 lwlock main 4: shacq 2489271
exacq 67043 blk 85650 spindelay 463 dequeue self 68401

postgresql-2016-03-01_124749.log:PID 1753 lwlock main 4: shacq 2232791
exacq 60647 blk 78659 spindelay 364 dequeue self 65180

postgresql-2016-03-01_124750.log:PID 1849 lwlock main 4: shacq 2374922
exacq 64075 blk 81860 spindelay 339 dequeue self 67584

*-------------Block time of ProcArrayLock has reduced significantly.*

ClogControlLock : Base.

===================

postgresql-2016-03-01_115302.log:PID 110040 lwlock main 11: shacq 586129
exacq 268808 blk 90570 spindelay 261 dequeue self 59619

postgresql-2016-03-01_115303.log:PID 110047 lwlock main 11: shacq 593492
exacq 271019 blk 89547 spindelay 268 dequeue self 59999

postgresql-2016-03-01_115303.log:PID 110078 lwlock main 11: shacq 620830
exacq 285244 blk 92939 spindelay 262 dequeue self 61912

postgresql-2016-03-01_115304.log:PID 110083 lwlock main 11: shacq 633101
exacq 289983 blk 93485 spindelay 262 dequeue self 62394

postgresql-2016-03-01_115305.log:PID 110105 lwlock main 11: shacq 646584
exacq 297598 blk 93331 spindelay 312 dequeue self 63279

ClogControlLock : With Patch.

=======================

postgresql-2016-03-01_124730.log:PID 1865 lwlock main 11: shacq 722881
exacq 330273 blk 106163 spindelay 468 dequeue self 80316

postgresql-2016-03-01_124731.log:PID 1857 lwlock main 11: shacq 713720
exacq 327158 blk 106719 spindelay 439 dequeue self 79996

postgresql-2016-03-01_124732.log:PID 1826 lwlock main 11: shacq 696762
exacq 317239 blk 104523 spindelay 448 dequeue self 79374

postgresql-2016-03-01_124732.log:PID 1862 lwlock main 11: shacq 721272
exacq 330350 blk 105965 spindelay 492 dequeue self 81036

postgresql-2016-03-01_124733.log:PID 1879 lwlock main 11: shacq 737398
exacq 335357 blk 105424 spindelay 520 dequeue self 80977

*-------------Block time of ClogControlLock has increased slightly.*

I will continue with further tests at lower client counts.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
Cache_data_in_GetSnapshotData_POC_01.patch application/octet-stream 8.4 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-03 18:20:05
Message-ID: CA+TgmoapMXzCGFvjRYRoYAnCwpYf+d2itXx0nA-oSg+rGNOOFQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:

>
> On Tue, Mar 1, 2016 at 12:59 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
> wrote:
> > Don't we need to add this only when the xid of current transaction is
> > valid? Also, I think it will be better if we can explain why we need to
> > add the our own transaction id while caching the snapshot.
>
> I have fixed the same thing and patch is attached.
>
> Some more tests done after that
>
> *pgbench write tests: on 8 socket, 64 core machine.*
>
> /postgres -c shared_buffers=16GB -N 200 -c min_wal_size=15GB -c
> max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
> checkpoint_completion_target=0.9
>
> ./pgbench -c $clients -j $clients -T 1800 -M prepared postgres
>
> [image: Inline image 3]
>
> A small improvement in performance at 64 thread.
>

What if you apply both this and Amit's clog control log patch(es)? Maybe
the combination of the two helps substantially more than either one alone.

Or maybe not, but seems worth checking.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-10 07:34:45
Message-ID: CAD__OuhygfLox2grcL4=jyLjGMsLmPiWqCi05PwrbOoqzqNyYw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> What if you apply both this and Amit's clog control log patch(es)? Maybe
> the combination of the two helps substantially more than either one alone.

I did the above tests along with Amit's clog patch. Machine: 8 sockets, 64
cores, 128 hyperthreads.

clients  BASE           ONLY CLOG CHANGES  % Increase      ONLY SAVE SNAPSHOT  % Increase     CLOG CHANGES + SAVE SNAPSHOT  % Increase
64       29247.658034   30855.728835       5.4981181711    29254.532186        0.0235032562   32691.832776                  11.7758992463
88       31214.305391   33313.393095       6.7247618606    32109.248609        2.8670931702   35433.655203                  13.5173592978
128      30896.673949   34015.362152       10.0939285832   ***                 ***            34947.296355                  13.110221549
256      27183.780921   31192.895437       14.7481857938   ***                 ***            32873.782735                  20.9316056164
With the clog changes, the performance of the snapshot-caching patch
improves.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-10 08:06:37
Message-ID: CAA4eK1JD=ta9thMfGfHx06hkK9qf1W5RXWuxwYW8r9YnHzPq0Q@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 10, 2016 at 1:04 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:

>
>
> On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> >What if you apply both this and Amit's clog control log patch(es)? Maybe
> the combination of the two helps substantially more than either >one alone.
>
>
> I did the above tests along with Amit's clog patch. Machine :8 socket, 64
> core. 128 hyperthread.
>
> clients BASE ONLY CLOG CHANGES % Increase ONLY SAVE SNAPSHOT % Increase CLOG
> CHANGES + SAVE SNAPSHOT % Increase
> 64 29247.658034 30855.728835 5.4981181711 29254.532186 0.0235032562
> 32691.832776 11.7758992463
> 88 31214.305391 33313.393095 6.7247618606 32109.248609 2.8670931702
> 35433.655203 13.5173592978
> 128 30896.673949 34015.362152 10.0939285832 *** *** 34947.296355
> 13.110221549
> 256 27183.780921 31192.895437 14.7481857938 *** *** 32873.782735
> 20.9316056164
> With clog changes, perf of caching the snapshot patch improves.
>
>
This data looks promising to me and indicates that saving the snapshot has
benefits and we can see a noticeable performance improvement, especially
once the CLog contention gets reduced. I wonder if we should try these
tests with unlogged tables, because I suspect most of the contention after
CLogControlLock and ProcArrayLock is for WAL-related locks, so you might
be able to see better gains with these patches. Do you think it makes
sense to check the performance by increasing the number of CLOG buffers (a
patch for this is posted in the Speed up Clog thread [1]), as that also
relieves contention on CLOG as per the tests I have done?

[1] -
http://www.postgresql.org/message-id/CAA4eK1LMMGNQ439BUm0LcS3p0sb8S9kc-cUGU_ThNqMwA8_Tug@mail.gmail.com

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-11 12:44:08
Message-ID: CAD__OuiyXzBa9xM4OeHuEADcsLLq2fMRTNYmYn4Ufw0eWNsu8A@mail.gmail.com
Lists: pgsql-hackers

Thanks Amit,
I did a quick pgbench write test on unlogged tables at 88 clients, as that
had the peak performance in the previous test. There is a big jump in TPS
due to the clog changes.

clients  BASE           ONLY CLOG CHANGES  % Increase      ONLY SAVE SNAPSHOT  % Increase     CLOG CHANGES + SAVE SNAPSHOT  % Increase
88       36055.425005   52796.618434       46.4318294034   37728.004118        4.6389111008   56025.454917                  55.3870323515

Clients  + WITH INCREASE IN CLOG BUFFER  % Increase
88       58217.924138                    61.4678626862

I will continue to benchmark the above tests with a much wider range of
clients.

On Thu, Mar 10, 2016 at 1:36 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:

> On Thu, Mar 10, 2016 at 1:04 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
> wrote:
>
>>
>>
>> On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
>> wrote:
>> >What if you apply both this and Amit's clog control log patch(es)?
>> Maybe the combination of the two helps substantially more than either >one
>> alone.
>>
>>
>> I did the above tests along with Amit's clog patch. Machine :8 socket, 64
>> core. 128 hyperthread.
>>
>> clients BASE ONLY CLOG CHANGES % Increase ONLY SAVE SNAPSHOT % Increase CLOG
>> CHANGES + SAVE SNAPSHOT % Increase
>> 64 29247.658034 30855.728835 5.4981181711 29254.532186 0.0235032562
>> 32691.832776 11.7758992463
>> 88 31214.305391 33313.393095 6.7247618606 32109.248609 2.8670931702
>> 35433.655203 13.5173592978
>> 128 30896.673949 34015.362152 10.0939285832 *** *** 34947.296355
>> 13.110221549
>> 256 27183.780921 31192.895437 14.7481857938 *** *** 32873.782735
>> 20.9316056164
>> With clog changes, perf of caching the snapshot patch improves.
>>
>>
> This data looks promising to me and indicates that saving the snapshot has
> benefits and we can see noticeable performance improvement especially once
> the CLog contention gets reduced. I wonder if we should try these tests
> with unlogged tables, because I suspect most of the contention after
> CLogControlLock and ProcArrayLock is for WAL related locks, so you might be
> able to see better gain with these patches. Do you think it makes sense to
> check the performance by increasing CLOG buffers (patch for same is posted
> in Speed up Clog thread [1]) as that also relieves contention on CLOG as
> per the tests I have done?
>
>
> [1] -
> http://www.postgresql.org/message-id/CAA4eK1LMMGNQ439BUm0LcS3p0sb8S9kc-cUGU_ThNqMwA8_Tug@mail.gmail.com
>
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-16 08:40:21
Message-ID: CAD__OujrdwQdJdoVHahQLDg-6ivu6iBCi9iJ1qPu6AtUQpL4UQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:
> I will continue to benchmark above tests with much wider range of clients.

The latest benchmarking shows the following results for unlogged tables.

clients  BASE           ONLY CLOG CHANGES  % Increase      CLOG CHANGES + SAVE SNAPSHOT  % Increase
1        1198.326337    1328.069656        10.8270439357   1234.078342                   2.9834948875
32       37455.181727   38295.250519       2.2428640131    41023.126293                  9.5259037641
64       48838.016451   50675.845885       3.7631123611    51662.814319                  5.7840143259
88       36878.187766   53173.577363       44.1870671639   56025.454917                  51.9203038731
128      35901.537773   52026.024098       44.9130798434   53864.486733                  50.0339263281
256      28130.354402   46793.134156       66.3439197647   46817.04602                   66.4289235427
The performance difference at 1 client seems to be just run-to-run
variance.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-16 17:29:22
Message-ID: CA+TgmoYVdqPwsY_VpQoP4O2ZtQJ2-AOfJ1oMnK5Gi2EbMqE=BQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 16, 2016 at 4:40 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:

>
> On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
> wrote:
> > I will continue to benchmark above tests with much wider range of
> clients.
>
> Latest Benchmarking shows following results for unlogged tables.
>
> clients BASE ONLY CLOG CHANGES % Increase CLOG CHANGES + SAVE SNAPSHOT %
> Increase
> 1 1198.326337 1328.069656 10.8270439357 1234.078342 2.9834948875
> 32 37455.181727 38295.250519 2.2428640131 41023.126293 9.5259037641
> 64 48838.016451 50675.845885 3.7631123611 51662.814319 5.7840143259
> 88 36878.187766 53173.577363 44.1870671639 56025.454917 51.9203038731
> 128 35901.537773 52026.024098 44.9130798434 53864.486733 50.0339263281
> 256 28130.354402 46793.134156 66.3439197647 46817.04602 66.4289235427
>

Whoa. At 64 clients, we're hardly getting any benefit, but then by 88
clients, we're getting a huge benefit. I wonder why there's that sharp
change there.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-16 17:33:26
Message-ID: [email protected]
Lists: pgsql-hackers

On 2016-03-16 13:29:22 -0400, Robert Haas wrote:
> Whoa. At 64 clients, we're hardly getting any benefit, but then by 88
> clients, we're getting a huge benefit. I wonder why there's that sharp
> change there.

What's the specifics of the machine tested? I wonder if it either
correlates with the number of hardware threads, real cores, or cache
sizes.

- Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-16 17:45:24
Message-ID: CA+TgmoYW-62XAXZUPHctwHL5sDxvqcqZErurRiFhAw=NBFkMnQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 16, 2016 at 1:33 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2016-03-16 13:29:22 -0400, Robert Haas wrote:
>> Whoa. At 64 clients, we're hardly getting any benefit, but then by 88
>> clients, we're getting a huge benefit. I wonder why there's that sharp
>> change there.
>
> What's the specifics of the machine tested? I wonder if it either
> correlates with the number of hardware threads, real cores, or cache
> sizes.

I think this was done on cthulhu, whose /proc/cpuinfo output ends this way:

processor : 127
vendor_id : GenuineIntel
cpu family : 6
model : 47
model name : Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
stepping : 2
microcode : 0x37
cpu MHz : 2129.000
cache size : 24576 KB
physical id : 3
siblings : 16
core id : 25
cpu cores : 8
apicid : 243
initial apicid : 243
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts
rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64
monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1
sse4_2 x2apic popcnt aes lahf_lm ida arat epb dtherm tpr_shadow vnmi
flexpriority ept vpid
bogomips : 4266.62
clflush size : 64
cache_alignment : 64
address sizes : 44 bits physical, 48 bits virtual
power management:

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-17 03:30:39
Message-ID: CAA4eK1LBOQ4e3Ycge+Fe0euzVu89CqGTuGNeajOienxJR0AEKA@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 16, 2016 at 10:59 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Wed, Mar 16, 2016 at 4:40 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
> wrote:
>
>>
>> On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
>> wrote:
>> > I will continue to benchmark above tests with much wider range of
>> clients.
>>
>> Latest Benchmarking shows following results for unlogged tables.
>>
>> clients BASE ONLY CLOG CHANGES % Increase CLOG CHANGES + SAVE SNAPSHOT %
>> Increase
>> 1 1198.326337 1328.069656 10.8270439357 1234.078342 2.9834948875
>> 32 37455.181727 38295.250519 2.2428640131 41023.126293 9.5259037641
>> 64 48838.016451 50675.845885 3.7631123611 51662.814319 5.7840143259
>> 88 36878.187766 53173.577363 44.1870671639 56025.454917 51.9203038731
>> 128 35901.537773 52026.024098 44.9130798434 53864.486733 50.0339263281
>> 256 28130.354402 46793.134156 66.3439197647 46817.04602 66.4289235427
>>
>
> Whoa. At 64 clients, we're hardly getting any benefit, but then by 88
> clients, we're getting a huge benefit. I wonder why there's that sharp
> change there.
>
>
If you see, for the Base readings, there is a performance increase up to
64 clients and then a fall at 88 clients, which to me indicates that it
hits very high contention around CLogControlLock at 88 clients, which the
CLog patch is able to control to a great degree (if required, I think the
same can be verified with the LWLock stats data). One reason for hitting
contention at 88 clients is that this machine seems to have 64 cores (it
has 8 sockets and 8 cores per socket), as per the lscpu output below.

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 1064.000
BogoMIPS: 4266.62
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 24576K
NUMA node0 CPU(s): 0,65-71,96-103
NUMA node1 CPU(s): 72-79,104-111
NUMA node2 CPU(s): 80-87,112-119
NUMA node3 CPU(s): 88-95,120-127
NUMA node4 CPU(s): 1-8,33-40
NUMA node5 CPU(s): 9-16,41-48
NUMA node6 CPU(s): 17-24,49-56
NUMA node7 CPU(s): 25-32,57-64

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-18 10:54:31
Message-ID: CAD__Ouh6PahJ+q1mzzjDzLo4v9GjyufegtJNAyXc0_Lfh-4coQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 17, 2016 at 9:00 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> If you see, for the Base readings, there is a performance increase up
> till 64 clients and then there is a fall at 88 clients, which to me
> indicates that it hits very high-contention around CLogControlLock at 88
> clients which CLog patch is able to control to a great degree (if
> required, I think the same can be verified by LWLock stats data). One
> reason for hitting contention at 88 clients is that this machine seems
> to have 64-cores (it has 8 sockets and 8 Core(s) per socket) as per
> below information of lscpu command.

I am attaching the LWLock stats data for the above test setups (unlogged
tables).

                         At 64 clients       At 88 clients
                         (block-time units)  (block-time units)
*BASE*
ProcArrayLock            182946              117827
ClogControlLock          107420              120266

*clog patch*
ProcArrayLock            183663              121215
ClogControlLock          72806               65220

*clog patch + save snapshot*
ProcArrayLock            128260              83356
ClogControlLock          78921               74011
This is for unlogged tables. I mainly see that ProcArrayLock has higher
contention at 64 clients, and at 88 clients the contention shifts slightly
to other entities.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
lwlock details.odt application/vnd.oasis.opendocument.text 26.6 KB

From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-03-22 08:42:54
Message-ID: CAD__Ouic1Tvnwqm6Wf6j7Cz1Kk1DQgmy0isC7=OgX+3JtfGk9g@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 10, 2016 at 1:36 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
wrote:
> Do you think it makes sense to check the performance by increasing CLOG
> buffers (patch for same is posted in Speed up Clog thread [1]) as that
> also relieves contention on CLOG as per the tests I have done?

Along with the clog patches and the save-snapshot patch, I also applied
Amit's increase-clog-buffers patch. Below are the benchmark results.

clients  BASE           + WITH INCREASE IN CLOG BUFFER Patch  % Increase
1        1198.326337    1275.461955                           6.4369458985
32       37455.181727   41239.13593                           10.102618726
64       48838.016451   56362.163626                          15.4063324471
88       36878.187766   58217.924138                          57.8654691695
128      35901.537773   58239.783982                          62.22086182
256      28130.354402   56417.133939                          100.5560723934

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2016-04-08 06:43:25
Message-ID: CA+TgmobemA6zL35DDf98ZWKEV_XFDTa+=3C=3Uh=D2biK=qOuw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Mar 22, 2016 at 4:42 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> On Thu, Mar 10, 2016 at 1:36 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >Do you think it makes sense to check the performance by increasing CLOG buffers (patch for same is posted in Speed up Clog thread [1]) >as that also relieves contention on CLOG as per the tests I have done?
>
> Along with clog patches and save snapshot, I also applied Amit's increase clog buffer patch. Below is the benchmark results.

I think that we really shouldn't do anything about this patch until
after the CLOG stuff is settled, which it isn't yet. So I'm going to
mark this Returned with Feedback; let's reconsider it for 9.7.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-07-10 04:43:56
Message-ID: CAD__OugTRzzdb1543EU+-Q-UrhvOXsBCk9by=13fbQPiZxwAWA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Apr 8, 2016 at 12:13 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I think that we really shouldn't do anything about this patch until
> after the CLOG stuff is settled, which it isn't yet. So I'm going to
> mark this Returned with Feedback; let's reconsider it for 9.7.

I am attaching a rebased patch. I have tried to benchmark it again and
could see good improvement in the pgbench read-only case at very high
client counts on our cthulhu (8 nodes, 128 hyperthreads) and power2 (4
nodes, 192 hyperthreads) machines. There is some issue with the base-code
benchmarking, which is somehow not consistent; once I figure out what the
issue is, I will post an update.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
Cache_data_in_GetSnapshotData_POC_02.patch application/octet-stream 8.4 KB

From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-02 10:12:54
Message-ID: CAD__OujBZKT01MaEc61ZmskdDw1D=AkTP-5JcOP+P4wMvgUyKQ@mail.gmail.com
Lists: pgsql-hackers

I have made a few more changes in the new patch:

1. Ran pgindent.
2. Instead of an atomic state variable to ensure that only one process
caches the snapshot in shared memory, I have used a conditional try
lwlock (see the sketch after this list). With this, we have smaller and
more reliable code.
3. Performance benchmarking.
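
A minimal sketch of the try-lock approach (the flag and lock names follow
the attached patch; the actual copy into the cache is elided):

    /* Only one backend caches the snapshot; the rest simply move on. */
    if (!ProcGlobal->is_snapshot_valid &&
        LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
                                 LW_EXCLUSIVE))
    {
        /* copy the just-computed snapshot into ProcGlobal->cached_snapshot */
        ...
        ProcGlobal->is_snapshot_valid = true;
        LWLockRelease(&ProcGlobal->CachedSnapshotLock);
    }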

Machine - cthulhu
==============
[mithun(dot)cy(at)cthulhu bin]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 1197.000
BogoMIPS: 4266.63
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 24576K
NUMA node0 CPU(s): 0,65-71,96-103
NUMA node1 CPU(s): 72-79,104-111
NUMA node2 CPU(s): 80-87,112-119
NUMA node3 CPU(s): 88-95,120-127
NUMA node4 CPU(s): 1-8,33-40
NUMA node5 CPU(s): 9-16,41-48
NUMA node6 CPU(s): 17-24,49-56
NUMA node7 CPU(s): 25-32,57-64

Server configuration:
./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c
maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
wal_buffers=256MB &

pgbench configuration:
scale_factor = 300
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S postgres

The machine has 64 cores; with this patch I can see the server start to
improve beyond 64 clients. I have tested up to 256 clients, which shows a
performance improvement of nearly 39% at the maximum.

Alternatively, I thought that instead of storing the snapshot in shared
memory, each backend could hold on to its previously computed snapshot
until the next commit/rollback happens in the system. We can have a global
counter value associated with the snapshot whenever it is computed. Upon
any transaction end, the global counter is incremented. So when a process
wants a new snapshot, it can compare these two values to check whether it
can reuse the previously computed snapshot. This makes the code
significantly simpler. With the first approach, one process has to compute
and store the snapshot after every transaction end and the others can
reuse the cached snapshot; in the second approach, every process has to
re-compute the snapshot. So I am keeping the first approach.
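
For comparison, a rough sketch of that alternative (all names below are
hypothetical, since this variant was not implemented):

    /* Shared counter, bumped at every transaction end. */
    pg_atomic_uint64 snapshot_generation;       /* would live in PROC_HDR */

    /* Per-backend state. */
    static uint64       my_snapshot_generation = 0;
    static SnapshotData my_cached_snapshot;

    uint64      gen = pg_atomic_read_u64(&ProcGlobal->snapshot_generation);

    if (gen != 0 && gen == my_snapshot_generation)
    {
        /* Nothing committed or aborted since: reuse my_cached_snapshot. */
    }
    else
    {
        /* Recompute the snapshot, then remember it and the generation. */
        my_snapshot_generation = gen;
    }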

On Mon, Jul 10, 2017 at 10:13 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> On Fri, Apr 8, 2016 at 12:13 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I think that we really shouldn't do anything about this patch until
>> after the CLOG stuff is settled, which it isn't yet. So I'm going to
>> mark this Returned with Feedback; let's reconsider it for 9.7.
>
> I am updating a rebased patch have tried to benchmark again could see
> good improvement in the pgbench read-only case at very high clients on
> our cthulhu (8 nodes, 128 hyper thread machines) and power2 (4 nodes,
> 192 hyper threads) machine. There is some issue with base code
> benchmarking which is somehow not consistent so once I could figure
> out what is the issue with that I will update
>
> --
> Thanks and Regards
> Mithun C Y
> EnterpriseDB: http://www.enterprisedb.com

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
cache_the_snapshot_performance.ods application/vnd.oasis.opendocument.spreadsheet 17.8 KB
Cache_data_in_GetSnapshotData_POC_03.patch application/octet-stream 12.7 KB

From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-02 10:26:21
Message-ID: CAD__Ouj2PX9+TyBGKpUDwvRBP3_o-PA3dMVg9RFvSuatPE0S4w@mail.gmail.com
Lists: pgsql-hackers

On Wed, Aug 2, 2017 at 3:42 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
Sorry, there was an unnecessary header included in proc.c which should be
removed; I am attaching the corrected patch.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
Cache_data_in_GetSnapshotData_POC_04.patch application/octet-stream 12.5 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-02 20:46:41
Message-ID: [email protected]
Lists: pgsql-hackers

Hi,

I think this patch should have a "cover letter" explaining what it's
trying to achieve, how it does so and why it's safe/correct. I think
it'd also be good to try to show some of the worst cases of this patch
(i.e. where the caching only adds overhead, but never gets used), and
some of the best cases (where the cache gets used quite often, and
reduces contention significantly).

I wonder if there's a way to avoid copying the snapshot into the cache
in situations where we're barely ever going to need it. But right now
the only way I can think of is to look at the length of the
ProcArrayLock wait queue and count the exclusive locks therein - which'd
be expensive and intrusive...

On 2017-08-02 15:56:21 +0530, Mithun Cy wrote:
> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
> index a7e8cf2..4d7debe 100644
> --- a/src/backend/storage/ipc/procarray.c
> +++ b/src/backend/storage/ipc/procarray.c
> @@ -366,11 +366,13 @@ ProcArrayRemove(PGPROC *proc, TransactionId latestXid)
> (arrayP->numProcs - index - 1) * sizeof(int));
> arrayP->pgprocnos[arrayP->numProcs - 1] = -1; /* for debugging */
> arrayP->numProcs--;
> + ProcGlobal->is_snapshot_valid = false;
> LWLockRelease(ProcArrayLock);
> return;
> }
> }
>
> + ProcGlobal->is_snapshot_valid = false;
> /* Oops */
> LWLockRelease(ProcArrayLock);

Why do we need to do anything here? And if we need to, why not in
ProcArrayAdd?

> @@ -1552,6 +1557,8 @@ GetSnapshotData(Snapshot snapshot)
> errmsg("out of memory")));
> }
>
> + snapshot->takenDuringRecovery = RecoveryInProgress();
> +
> /*
> * It is sufficient to get shared lock on ProcArrayLock, even if we are
> * going to set MyPgXact->xmin.
> @@ -1566,100 +1573,200 @@ GetSnapshotData(Snapshot snapshot)
> /* initialize xmin calculation with xmax */
> globalxmin = xmin = xmax;
>
> - snapshot->takenDuringRecovery = RecoveryInProgress();
> -

This movement of code seems fairly unrelated?

> if (!snapshot->takenDuringRecovery)
> {

Do we really need to restrict this to !recovery snapshots? It'd be nicer
if we could generalize it - I don't immediately see anything !recovery
specific in the logic here.

> - int *pgprocnos = arrayP->pgprocnos;
> - int numProcs;
> + if (ProcGlobal->is_snapshot_valid)
> + {
> + Snapshot csnap = &ProcGlobal->cached_snapshot;
> + TransactionId *saved_xip;
> + TransactionId *saved_subxip;
>
> - /*
> - * Spin over procArray checking xid, xmin, and subxids. The goal is
> - * to gather all active xids, find the lowest xmin, and try to record
> - * subxids.
> - */
> - numProcs = arrayP->numProcs;
> - for (index = 0; index < numProcs; index++)
> + saved_xip = snapshot->xip;
> + saved_subxip = snapshot->subxip;
> +
> + memcpy(snapshot, csnap, sizeof(SnapshotData));
> +
> + snapshot->xip = saved_xip;
> + snapshot->subxip = saved_subxip;
> +
> + memcpy(snapshot->xip, csnap->xip,
> + sizeof(TransactionId) * csnap->xcnt);
> + memcpy(snapshot->subxip, csnap->subxip,
> + sizeof(TransactionId) * csnap->subxcnt);
> + globalxmin = ProcGlobal->cached_snapshot_globalxmin;
> + xmin = csnap->xmin;
> +
> + count = csnap->xcnt;
> + subcount = csnap->subxcnt;
> + suboverflowed = csnap->suboverflowed;
> +
> + Assert(TransactionIdIsValid(globalxmin));
> + Assert(TransactionIdIsValid(xmin));
> + }

Can we move this to a separate function? In fact, could you check how
much effort it'd be to split cached, !recovery, recovery cases into
three static inline helper functions? This is starting to be hard to
read, the indentation added in this patch doesn't help... Consider doing
recovery, !recovery cases in a preliminary separate patch.
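
A skeleton of the suggested split could look like this (the helper names
are purely illustrative):

    /* one static inline helper per case, called from GetSnapshotData() */
    static inline void FillSnapshotDuringRecovery(Snapshot snapshot);
    static inline void FillSnapshotFromCache(Snapshot snapshot);
    static inline void FillSnapshotFromProcArray(Snapshot snapshot);

    if (snapshot->takenDuringRecovery)
        FillSnapshotDuringRecovery(snapshot);
    else if (ProcGlobal->is_snapshot_valid)
        FillSnapshotFromCache(snapshot);
    else
        FillSnapshotFromProcArray(snapshot);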

> + * Let only one of the caller cache the computed snapshot, and
> + * others can continue as before.
> */
> - if (!suboverflowed)
> + if (!ProcGlobal->is_snapshot_valid &&
> + LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
> + LW_EXCLUSIVE))
> {

I'd also move this to a helper function (the bit inside the lock).

> +
> + /*
> + * The memory barrier has be to be placed here to ensure that
> + * is_snapshot_valid is set only after snapshot is cached.
> + */
> + pg_write_barrier();
> + ProcGlobal->is_snapshot_valid = true;
> + LWLockRelease(&ProcGlobal->CachedSnapshotLock);

LWLockRelease() is a full memory barrier, so this shouldn't be required.

> /*
> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
> index 7dbaa81..7d06904 100644
> --- a/src/include/storage/proc.h
> +++ b/src/include/storage/proc.h
> @@ -16,6 +16,7 @@
>
> #include "access/xlogdefs.h"
> #include "lib/ilist.h"
> +#include "utils/snapshot.h"
> #include "storage/latch.h"
> #include "storage/lock.h"
> #include "storage/pg_sema.h"
> @@ -253,6 +254,22 @@ typedef struct PROC_HDR
> int startupProcPid;
> /* Buffer id of the buffer that Startup process waits for pin on, or -1 */
> int startupBufferPinWaitBufId;
> +
> + /*
> + * In GetSnapshotData we can reuse the previously snapshot computed if no
> + * new transaction has committed or rolledback. Thus saving good amount of
> + * computation cycle under GetSnapshotData where we need to iterate
> + * through procArray every time we try to get the current snapshot. Below
> + * members help us to save the previously computed snapshot in global
> + * shared memory and any process which want to get current snapshot can
> + * directly copy from them if it is still valid.
> + */
> + SnapshotData cached_snapshot; /* Previously saved snapshot */
> + volatile bool is_snapshot_valid; /* is above snapshot valid */
> + TransactionId cached_snapshot_globalxmin; /* globalxmin when above

I'd rename is_snapshot_valid to 'cached_snapshot_valid' or such, it's
not clear from the name what it refers to.

> + LWLock CachedSnapshotLock; /* A try lock to make sure only one
> + * process write to cached_snapshot */
> } PROC_HDR;
>

Hm, I'd not add the lock to the struct, we normally don't do that for
the "in core" lwlocks that don't exist in a "configurable" number like
the buffer locks and such.

Regards,

Andres


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-04 02:15:17
Message-ID: CAD__Oug+sgmorNkAASmvMXHTm=65ZJMKE4pcPpnc7Dem5UUhXQ@mail.gmail.com
Lists: pgsql-hackers

On 3 Aug 2017 2:16 am, "Andres Freund" <andres(at)anarazel(dot)de> wrote:

Hi Andres, thanks for the detailed review. I agree with all of the
comments. I am going on vacation; once I come back I will address them and
submit a new patch.


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-29 05:57:57
Message-ID: CAD__OujRqABhiAJLcPVBJCvDWBWVotb7kxm+ECppTa+nyfETXA@mail.gmail.com
Lists: pgsql-hackers

Cache the SnapshotData for reuse:
===========================
In one of our perf analyses using the perf tool, GetSnapshotData showed a
very high percentage of CPU cycles on a read-only workload when there is a
very high number of concurrent connections (>= 64).

Machine : cthulhu 8 node machine.
----------------------------------------------
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 2128.000
BogoMIPS: 4266.63
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 24576K
NUMA node0 CPU(s): 0,65-71,96-103
NUMA node1 CPU(s): 72-79,104-111
NUMA node2 CPU(s): 80-87,112-119
NUMA node3 CPU(s): 88-95,120-127
NUMA node4 CPU(s): 1-8,33-40
NUMA node5 CPU(s): 9-16,41-48
NUMA node6 CPU(s): 17-24,49-56
NUMA node7 CPU(s): 25-32,57-64

Perf CPU cycle 128 clients.

On further analysis, it appears this is mainly due to cache misses incurred
while iterating through the large proc array to compute the SnapshotData.
Also, on a read-only (read-heavy) workload the SnapshotData mostly remains
the same, since transactions rarely commit or roll back. Taking advantage of
this, we can save the previously computed SnapshotData in shared memory, and
GetSnapshotData can copy the saved SnapshotData instead of recomputing it
while it is still valid. A saved SnapshotData remains valid until some
transaction ends.

[Perf analysis Base]

Samples: 421K of event 'cache-misses', Event count (approx.): 160069274
Overhead Command Shared Object Symbol
18.63% postgres postgres [.] GetSnapshotData
11.98% postgres postgres [.] _bt_compare
10.20% postgres postgres [.] PinBuffer
8.63% postgres postgres [.] LWLockAttemptLock
6.50% postgres postgres [.] LWLockRelease

[Perf analysis with Patch]

Server configuration:
./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c
maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
wal_buffers=256MB &

pgbench configuration:
scale_factor = 300
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S postgres

The machine has 64 cores; with this patch the server starts to show
improvement beyond 64 clients. I have tested up to 256 clients, where the
performance improvement peaks at nearly 39% [1]. This is the best case for
the patch, where a once-computed SnapshotData is reused again and again.

The worst case seems to be small, quick write transactions, which frequently
invalidate the cached SnapshotData before it is reused by any subsequent
GetSnapshotData call. So far I have tested simple-update runs (pgbench -M
prepared -N) on the same machine with the above server configuration, and I
do not see much change in TPS numbers.

All TPS are median of 3 runs
Clients    TPS (Patch 05)    TPS (Base)       %Diff
1          752.461117        755.186777       -0.3%
64         32171.296537      31202.153576     +3.1%
128        41059.660769      40061.929658     +2.49%

I will do some profiling to find out why this case does not cost us any
performance despite the caching overhead.

[]

On Thu, Aug 3, 2017 at 2:16 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Hi,
>
> I think this patch should have a "cover letter" explaining what it's
> trying to achieve, how it does so and why it's safe/correct. I think
> it'd also be good to try to show some of the worst cases of this patch
> (i.e. where the caching only adds overhead, but never gets used), and
> some of the best cases (where the cache gets used quite often, and
> reduces contention significantly).
>
> I wonder if there's a way to avoid copying the snapshot into the cache
> in situations where we're barely ever going to need it. But right now
> the only way I can think of is to look at the length of the
> ProcArrayLock wait queue and count the exclusive locks therein - which'd
> be expensive and intrusive...
>
>
> On 2017-08-02 15:56:21 +0530, Mithun Cy wrote:
>> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
>> index a7e8cf2..4d7debe 100644
>> --- a/src/backend/storage/ipc/procarray.c
>> +++ b/src/backend/storage/ipc/procarray.c
>> @@ -366,11 +366,13 @@ ProcArrayRemove(PGPROC *proc, TransactionId latestXid)
>> (arrayP->numProcs - index - 1) * sizeof(int));
>> arrayP->pgprocnos[arrayP->numProcs - 1] = -1; /* for debugging */
>> arrayP->numProcs--;
>> + ProcGlobal->is_snapshot_valid = false;
>> LWLockRelease(ProcArrayLock);
>> return;
>> }
>> }
>>
>> + ProcGlobal->is_snapshot_valid = false;
>> /* Oops */
>> LWLockRelease(ProcArrayLock);
>
> Why do we need to do anything here? And if we need to, why not in
> ProcArrayAdd?

In ProcArrayRemove I can see that ShmemVariableCache->latestCompletedXid is
set to latestXid, which changes the xmax computed in GetSnapshotData, hence
the snapshot is invalidated there. ProcArrayAdd does not seem to affect the
SnapshotData. I have now modified the patch to set cached_snapshot_valid to
false only just below the statement
ShmemVariableCache->latestCompletedXid = latestXid:
+if (TransactionIdIsValid(latestXid))
+{
+ Assert(TransactionIdIsValid(allPgXact[proc->pgprocno].xid));
+ /* Advance global latestCompletedXid while holding the lock */
+ if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,
+ latestXid))
+ ShmemVariableCache->latestCompletedXid = latestXid;

>> @@ -1552,6 +1557,8 @@ GetSnapshotData(Snapshot snapshot)
>> errmsg("out of memory")));
>> }
>>
>> + snapshot->takenDuringRecovery = RecoveryInProgress();
>> +
>> /*
>> * It is sufficient to get shared lock on ProcArrayLock, even if we are
>> * going to set MyPgXact->xmin.
>> @@ -1566,100 +1573,200 @@ GetSnapshotData(Snapshot snapshot)
>> /* initialize xmin calculation with xmax */
>> globalxmin = xmin = xmax;
>>
>> - snapshot->takenDuringRecovery = RecoveryInProgress();
>> -
>
> This movement of code seems fairly unrelated?

-- Yes, not related; it was part of the early POC patch and was retained
as-is. I have now reverted that change and moved the assignment back under
the ProcArrayLock.

>> if (!snapshot->takenDuringRecovery)
>> {
>
> Do we really need to restrict this to !recovery snapshots? It'd be nicer
> if we could generalize it - I don't immediately see anything !recovery
> specific in the logic here.

Agreed. I will add one more patch to include the standby (recovery) case as
well, so as to unify all the cases where we can cache the snapshot. Before
that, let me see whether it brings similar benefits there.

>> - int *pgprocnos = arrayP->pgprocnos;
>> - int numProcs;
>> + if (ProcGlobal->is_snapshot_valid)
>> + {
>> + Snapshot csnap = &ProcGlobal->cached_snapshot;
>> + TransactionId *saved_xip;
>> + TransactionId *saved_subxip;
>>
>> - /*
>> - * Spin over procArray checking xid, xmin, and subxids. The goal is
>> - * to gather all active xids, find the lowest xmin, and try to record
>> - * subxids.
>> - */
>> - numProcs = arrayP->numProcs;
>> - for (index = 0; index < numProcs; index++)
>> + saved_xip = snapshot->xip;
>> + saved_subxip = snapshot->subxip;
>> +
>> + memcpy(snapshot, csnap, sizeof(SnapshotData));
>> +
>> + snapshot->xip = saved_xip;
>> + snapshot->subxip = saved_subxip;
>> +
>> + memcpy(snapshot->xip, csnap->xip,
>> + sizeof(TransactionId) * csnap->xcnt);
>> + memcpy(snapshot->subxip, csnap->subxip,
>> + sizeof(TransactionId) * csnap->subxcnt);
>> + globalxmin = ProcGlobal->cached_snapshot_globalxmin;
>> + xmin = csnap->xmin;
>> +
>> + count = csnap->xcnt;
>> + subcount = csnap->subxcnt;
>> + suboverflowed = csnap->suboverflowed;
>> +
>> + Assert(TransactionIdIsValid(globalxmin));
>> + Assert(TransactionIdIsValid(xmin));
>> + }
>
> Can we move this to a separate function? In fact, could you check how
> much effort it'd be to split cached, !recovery, recovery cases into
> three static inline helper functions? This is starting to be hard to
> read, the indentation added in this patch doesn't help... Consider doing
> recovery, !recovery cases in a preliminary separate patch.

In this patch, I have moved the code into 2 functions: one to cache the
snapshot data and another to read the snapshot data from the cache. In the
next patch, I will try to unify this with the recovery (standby) case. For
the recovery case we copy the xids from a simple xid array,
KnownAssignedXids[]; I am not completely sure whether caching them brings
similar performance benefits, so I will also try to get a perf report for
that.

>
>> + * Let only one of the caller cache the computed snapshot, and
>> + * others can continue as before.
>> */
>> - if (!suboverflowed)
>> + if (!ProcGlobal->is_snapshot_valid &&
>> + LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
>> + LW_EXCLUSIVE))
>> {
>
> I'd also move this to a helper function (the bit inside the lock).

Fixed as suggested.

>> +
>> + /*
>> + * The memory barrier has be to be placed here to ensure that
>> + * is_snapshot_valid is set only after snapshot is cached.
>> + */
>> + pg_write_barrier();
>> + ProcGlobal->is_snapshot_valid = true;
>> + LWLockRelease(&ProcGlobal->CachedSnapshotLock);
>
> LWLockRelease() is a full memory barrier, so this shouldn't be required.

Okay, sorry for my misunderstanding. Do you want me to set
ProcGlobal->is_snapshot_valid after
LWLockRelease(&ProcGlobal->CachedSnapshotLock)?

>
>> /*
>> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
>> index 7dbaa81..7d06904 100644
>> --- a/src/include/storage/proc.h
>> +++ b/src/include/storage/proc.h
>> @@ -16,6 +16,7 @@
>>
>> #include "access/xlogdefs.h"
>> #include "lib/ilist.h"
>> +#include "utils/snapshot.h"
>> #include "storage/latch.h"
>> #include "storage/lock.h"
>> #include "storage/pg_sema.h"
>> @@ -253,6 +254,22 @@ typedef struct PROC_HDR
>> int startupProcPid;
>> /* Buffer id of the buffer that Startup process waits for pin on, or -1 */
>> int startupBufferPinWaitBufId;
>> +
>> + /*
>> + * In GetSnapshotData we can reuse the previously snapshot computed if no
>> + * new transaction has committed or rolledback. Thus saving good amount of
>> + * computation cycle under GetSnapshotData where we need to iterate
>> + * through procArray every time we try to get the current snapshot. Below
>> + * members help us to save the previously computed snapshot in global
>> + * shared memory and any process which want to get current snapshot can
>> + * directly copy from them if it is still valid.
>> + */
>> + SnapshotData cached_snapshot; /* Previously saved snapshot */
>> + volatile bool is_snapshot_valid; /* is above snapshot valid */
>> + TransactionId cached_snapshot_globalxmin; /* globalxmin when above
>
>
> I'd rename is_snapshot_valid to 'cached_snapshot_valid' or such, it's
> not clear from the name what it refers to.

-- Renamed to cached_snapshot_valid.

>
>> + LWLock CachedSnapshotLock; /* A try lock to make sure only one
>> + * process write to cached_snapshot */
>> } PROC_HDR;
>>
>
> Hm, I'd not add the lock to the struct, we normally don't do that for
> the "in core" lwlocks that don't exist in a "configurable" number like
> the buffer locks and such.
>

-- I am not really sure about this. Could you please clarify what exactly I
should be doing here to get this corrected? Do I have to add it to
lwlocknames.txt as a new LWLock?

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-29 07:09:42
Message-ID: CAD__OugfQeL-8irv2OavhGAYJvU7VQkqe-WtT5r_X4TKxBU-kQ@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

Hi all, please ignore this mail; it is incomplete and I have to add more
information. Sorry, I accidentally pressed the send button while replying.

On Tue, Aug 29, 2017 at 11:27 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> Cache the SnapshotData for reuse:
> ===========================
> In one of our perf analysis using perf tool it showed GetSnapshotData
> takes a very high percentage of CPU cycles on readonly workload when
> there is very high number of concurrent connections >= 64.
>
> Machine : cthulhu 8 node machine.
> ----------------------------------------------
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 128
> On-line CPU(s) list: 0-127
> Thread(s) per core: 2
> Core(s) per socket: 8
> Socket(s): 8
> NUMA node(s): 8
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 47
> Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
> Stepping: 2
> CPU MHz: 2128.000
> BogoMIPS: 4266.63
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 24576K
> NUMA node0 CPU(s): 0,65-71,96-103
> NUMA node1 CPU(s): 72-79,104-111
> NUMA node2 CPU(s): 80-87,112-119
> NUMA node3 CPU(s): 88-95,120-127
> NUMA node4 CPU(s): 1-8,33-40
> NUMA node5 CPU(s): 9-16,41-48
> NUMA node6 CPU(s): 17-24,49-56
> NUMA node7 CPU(s): 25-32,57-64
>
> Perf CPU cycle 128 clients.
>
> On further analysis, it appears this mainly accounts to cache-misses
> happening while iterating through large proc array to compute the
> SnapshotData. Also, it seems mainly on read-only (read heavy) workload
> SnapshotData mostly remains same as no new(rarely a) transaction
> commits or rollbacks. Taking advantage of this we can save the
> previously computed SanspshotData in shared memory and in
> GetSnapshotData we can use the saved SnapshotData instead of computing
> same when it is still valid. A saved SnapsotData is valid until no
> transaction end.
>
> [Perf analysis Base]
>
> Samples: 421K of event 'cache-misses', Event count (approx.): 160069274
> Overhead Command Shared Object Symbol
> 18.63% postgres postgres [.] GetSnapshotData
> 11.98% postgres postgres [.] _bt_compare
> 10.20% postgres postgres [.] PinBuffer
> 8.63% postgres postgres [.] LWLockAttemptLock
> 6.50% postgres postgres [.] LWLockRelease
>
>
> [Perf analysis with Patch]
>
>
> Server configuration:
> ./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
> max_wal_size=20GB -c checkpoint_timeout=900 -c
> maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
> wal_buffers=256MB &
>
> pgbench configuration:
> scale_factor = 300
> ./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S postgres
>
> The machine has 64 cores with this patch I can see server starts
> improvement after 64 clients. I have tested up to 256 clients. Which
> shows performance improvement nearly max 39% [1]. This is the best
> case for the patch where once computed snapshotData is reused further.
>
> The worst case seems to be small, quick write transactions with which
> frequently invalidate the cached SnapshotData before it is reused by
> any next GetSnapshotData call. As of now, I tested simple update
> tests: pgbench -M Prepared -N on the same machine with the above
> server configuration. I do not see much change in TPS numbers.
>
> All TPS are median of 3 runs
> Clients TPS-With Patch 05 TPS-Base %Diff
> 1 752.461117 755.186777 -0.3%
> 64 32171.296537 31202.153576 +3.1%
> 128 41059.660769 40061.929658 +2.49%
>
> I will do some profiling and find out why this case is not costing us
> some performance due to caching overhead.
>
>
> []
>
> On Thu, Aug 3, 2017 at 2:16 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> Hi,
>>
>> I think this patch should have a "cover letter" explaining what it's
>> trying to achieve, how it does so and why it's safe/correct. I think
>> it'd also be good to try to show some of the worst cases of this patch
>> (i.e. where the caching only adds overhead, but never gets used), and
>> some of the best cases (where the cache gets used quite often, and
>> reduces contention significantly).
>>
>> I wonder if there's a way to avoid copying the snapshot into the cache
>> in situations where we're barely ever going to need it. But right now
>> the only way I can think of is to look at the length of the
>> ProcArrayLock wait queue and count the exclusive locks therein - which'd
>> be expensive and intrusive...
>>
>>
>> On 2017-08-02 15:56:21 +0530, Mithun Cy wrote:
>>> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
>>> index a7e8cf2..4d7debe 100644
>>> --- a/src/backend/storage/ipc/procarray.c
>>> +++ b/src/backend/storage/ipc/procarray.c
>>> @@ -366,11 +366,13 @@ ProcArrayRemove(PGPROC *proc, TransactionId latestXid)
>>> (arrayP->numProcs - index - 1) * sizeof(int));
>>> arrayP->pgprocnos[arrayP->numProcs - 1] = -1; /* for debugging */
>>> arrayP->numProcs--;
>>> + ProcGlobal->is_snapshot_valid = false;
>>> LWLockRelease(ProcArrayLock);
>>> return;
>>> }
>>> }
>>>
>>> + ProcGlobal->is_snapshot_valid = false;
>>> /* Oops */
>>> LWLockRelease(ProcArrayLock);
>>
>> Why do we need to do anything here? And if we need to, why not in
>> ProcArrayAdd?
>
> In ProcArrayRemove I can see ShmemVariableCache->latestCompletedXid is
> set to latestXid which is going to change xmax in GetSnapshotData,
> hence invalidated the snapshot there. ProcArrayAdd do not seem to be
> affecting snapshotData. I have modified now to set
> cached_snapshot_valid to false just below the statement
> ShmemVariableCache->latestCompletedXid = latestXid only.
> +if (TransactionIdIsValid(latestXid))
> +{
> + Assert(TransactionIdIsValid(allPgXact[proc->pgprocno].xid));
> + /* Advance global latestCompletedXid while holding the lock */
> + if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,
> + latestXid))
> + ShmemVariableCache->latestCompletedXid = latestXid;
>
>
>
>>> @@ -1552,6 +1557,8 @@ GetSnapshotData(Snapshot snapshot)
>>> errmsg("out of memory")));
>>> }
>>>
>>> + snapshot->takenDuringRecovery = RecoveryInProgress();
>>> +
>>> /*
>>> * It is sufficient to get shared lock on ProcArrayLock, even if we are
>>> * going to set MyPgXact->xmin.
>>> @@ -1566,100 +1573,200 @@ GetSnapshotData(Snapshot snapshot)
>>> /* initialize xmin calculation with xmax */
>>> globalxmin = xmin = xmax;
>>>
>>> - snapshot->takenDuringRecovery = RecoveryInProgress();
>>> -
>>
>> This movement of code seems fairly unrelated?
>
> -- Yes not related, It was part of early POC patch so retained it as
> it is. Now reverted those changes moved them back under the
> ProcArrayLock
>
>>> if (!snapshot->takenDuringRecovery)
>>> {
>>
>> Do we really need to restrict this to !recovery snapshots? It'd be nicer
>> if we could generalize it - I don't immediately see anything !recovery
>> specific in the logic here.
>
> Agree I will add one more patch on this to include standby(recovery
> state) case also to unify all the cases where we can cache the
> snapshot. Before let me see
>
>
>>> - int *pgprocnos = arrayP->pgprocnos;
>>> - int numProcs;
>>> + if (ProcGlobal->is_snapshot_valid)
>>> + {
>>> + Snapshot csnap = &ProcGlobal->cached_snapshot;
>>> + TransactionId *saved_xip;
>>> + TransactionId *saved_subxip;
>>>
>>> - /*
>>> - * Spin over procArray checking xid, xmin, and subxids. The goal is
>>> - * to gather all active xids, find the lowest xmin, and try to record
>>> - * subxids.
>>> - */
>>> - numProcs = arrayP->numProcs;
>>> - for (index = 0; index < numProcs; index++)
>>> + saved_xip = snapshot->xip;
>>> + saved_subxip = snapshot->subxip;
>>> +
>>> + memcpy(snapshot, csnap, sizeof(SnapshotData));
>>> +
>>> + snapshot->xip = saved_xip;
>>> + snapshot->subxip = saved_subxip;
>>> +
>>> + memcpy(snapshot->xip, csnap->xip,
>>> + sizeof(TransactionId) * csnap->xcnt);
>>> + memcpy(snapshot->subxip, csnap->subxip,
>>> + sizeof(TransactionId) * csnap->subxcnt);
>>> + globalxmin = ProcGlobal->cached_snapshot_globalxmin;
>>> + xmin = csnap->xmin;
>>> +
>>> + count = csnap->xcnt;
>>> + subcount = csnap->subxcnt;
>>> + suboverflowed = csnap->suboverflowed;
>>> +
>>> + Assert(TransactionIdIsValid(globalxmin));
>>> + Assert(TransactionIdIsValid(xmin));
>>> + }
>>
>> Can we move this to a separate function? In fact, could you check how
>> much effort it'd be to split cached, !recovery, recovery cases into
>> three static inline helper functions? This is starting to be hard to
>> read, the indentation added in this patch doesn't help... Consider doing
>> recovery, !recovery cases in a preliminary separate patch.
>
> In this patch, I have moved the code 2 functions one to cache the
> snapshot data another to get the snapshot data from the cache. In the
> next patch, I will try to unify these things with recovery (standby)
> case. For recovery case, we are copying the xids from a simple xid
> array KnownAssignedXids[], I am not completely sure whether caching
> them bring similar performance benefits as above so I will also try to
> get a perf report for same.
>
>>
>>> + * Let only one of the caller cache the computed snapshot, and
>>> + * others can continue as before.
>>> */
>>> - if (!suboverflowed)
>>> + if (!ProcGlobal->is_snapshot_valid &&
>>> + LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
>>> + LW_EXCLUSIVE))
>>> {
>>
>> I'd also move this to a helper function (the bit inside the lock).
>
> Fixed as suggested.
>
>>> +
>>> + /*
>>> + * The memory barrier has be to be placed here to ensure that
>>> + * is_snapshot_valid is set only after snapshot is cached.
>>> + */
>>> + pg_write_barrier();
>>> + ProcGlobal->is_snapshot_valid = true;
>>> + LWLockRelease(&ProcGlobal->CachedSnapshotLock);
>>
>> LWLockRelease() is a full memory barrier, so this shouldn't be required.
>
> Okay, Sorry for my understanding, Do you want me to set
> ProcGlobal->is_snapshot_valid after
> LWLockRelease(&ProcGlobal->CachedSnapshotLock)?
>
>>
>>> /*
>>> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
>>> index 7dbaa81..7d06904 100644
>>> --- a/src/include/storage/proc.h
>>> +++ b/src/include/storage/proc.h
>>> @@ -16,6 +16,7 @@
>>>
>>> #include "access/xlogdefs.h"
>>> #include "lib/ilist.h"
>>> +#include "utils/snapshot.h"
>>> #include "storage/latch.h"
>>> #include "storage/lock.h"
>>> #include "storage/pg_sema.h"
>>> @@ -253,6 +254,22 @@ typedef struct PROC_HDR
>>> int startupProcPid;
>>> /* Buffer id of the buffer that Startup process waits for pin on, or -1 */
>>> int startupBufferPinWaitBufId;
>>> +
>>> + /*
>>> + * In GetSnapshotData we can reuse the previously snapshot computed if no
>>> + * new transaction has committed or rolledback. Thus saving good amount of
>>> + * computation cycle under GetSnapshotData where we need to iterate
>>> + * through procArray every time we try to get the current snapshot. Below
>>> + * members help us to save the previously computed snapshot in global
>>> + * shared memory and any process which want to get current snapshot can
>>> + * directly copy from them if it is still valid.
>>> + */
>>> + SnapshotData cached_snapshot; /* Previously saved snapshot */
>>> + volatile bool is_snapshot_valid; /* is above snapshot valid */
>>> + TransactionId cached_snapshot_globalxmin; /* globalxmin when above
>>
>>
>> I'd rename is_snapshot_valid to 'cached_snapshot_valid' or such, it's
>> not clear from the name what it refers to.
>
> -- Renamed to cached_snapshot_valid.
>
>>
>>> + LWLock CachedSnapshotLock; /* A try lock to make sure only one
>>> + * process write to cached_snapshot */
>>> } PROC_HDR;
>>>
>>
>> Hm, I'd not add the lock to the struct, we normally don't do that for
>> the "in core" lwlocks that don't exist in a "configurable" number like
>> the buffer locks and such.
>>
>
> -- I am not really sure about this. Can you please help what exactly I
> should be doing here to get this corrected? Is that I have to add it
> to lwlocknames.txt as a new LWLock?
>
> --
> Thanks and Regards
> Mithun C Y
> EnterpriseDB: http://www.enterprisedb.com

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-08-29 09:04:40
Message-ID: CAD__Oui=Q++3ER0pU2HanJqE7nGtOhWxypxxJQVsJqNS91LG-g@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Aug 3, 2017 at 2:16 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>I think this patch should have a "cover letter" explaining what it's
> trying to achieve, how it does so and why it's safe/correct.

Cache the SnapshotData for reuse:
===========================
In one of our performance analyses using the perf tool, GetSnapshotData
showed up with a very high percentage of CPU cycles on a read-only workload
with a very high number of concurrent connections (>= 64).

Machine : cthulhu 8 node machine.
----------------------------------------------
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 2128.000
BogoMIPS: 4266.63
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 24576K
NUMA node0 CPU(s): 0,65-71,96-103
NUMA node1 CPU(s): 72-79,104-111
NUMA node2 CPU(s): 80-87,112-119
NUMA node3 CPU(s): 88-95,120-127
NUMA node4 CPU(s): 1-8,33-40
NUMA node5 CPU(s): 9-16,41-48
NUMA node6 CPU(s): 17-24,49-56
NUMA node7 CPU(s): 25-32,57-64

On further analysis, it appears this is mainly due to cache misses incurred
while iterating through the large proc array to compute the SnapshotData.
Also, on a read-only (read-heavy) workload the SnapshotData mostly remains
the same, since transactions rarely commit or roll back. Taking advantage of
this, we can save the previously computed SnapshotData in shared memory, and
GetSnapshotData can copy the saved SnapshotData instead of recomputing it
while it is still valid. A saved SnapshotData remains valid as long as no
open transaction has ended after it was taken.
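
Roughly, the reuse fast path looks like this (a condensed sketch; field names
follow the attached patch, and the recomputation and caching paths are
omitted here):

/* In GetSnapshotData(), after acquiring ProcArrayLock in shared mode */
if (!snapshot->takenDuringRecovery && ProcGlobal->cached_snapshot_valid)
{
    Snapshot       csnap = &ProcGlobal->cached_snapshot;
    TransactionId *saved_xip = snapshot->xip;
    TransactionId *saved_subxip = snapshot->subxip;

    /* Reuse the cached snapshot instead of scanning the proc array. */
    memcpy(snapshot, csnap, sizeof(SnapshotData));
    snapshot->xip = saved_xip;
    snapshot->subxip = saved_subxip;
    memcpy(snapshot->xip, csnap->xip, sizeof(TransactionId) * csnap->xcnt);
    memcpy(snapshot->subxip, csnap->subxip,
           sizeof(TransactionId) * csnap->subxcnt);

    globalxmin = ProcGlobal->cached_snapshot_globalxmin;
    xmin = csnap->xmin;
    count = csnap->xcnt;
    subcount = csnap->subxcnt;
    suboverflowed = csnap->suboverflowed;
}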

[Perf cache-misses analysis of Base for 128 pgbench readonly clients]
======================================================
Samples: 421K of event 'cache-misses', Event count (approx.): 160069274
Overhead Command Shared Object Symbol
18.63% postgres postgres [.] GetSnapshotData
11.98% postgres postgres [.] _bt_compare
10.20% postgres postgres [.] PinBuffer
8.63% postgres postgres [.] LWLockAttemptLock
6.50% postgres postgres [.] LWLockRelease

[Perf cache-misses analysis with Patch for 128 pgbench readonly clients]
========================================================
Samples: 357K of event 'cache-misses', Event count (approx.): 102380622
Overhead Command Shared Object Symbol
0.27% postgres postgres [.] GetSnapshotData

With the patch, cache-miss events for GetSnapshotData have reduced significantly.

Test Setting:
=========
Server configuration:
./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c
maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
wal_buffers=256MB &

pgbench configuration:
scale_factor = 300
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S postgres

The machine has 64 cores; with this patch the server starts to show
improvement beyond 64 clients. I have tested up to 256 clients, where the
performance improvement peaks at nearly 39% [1]. This is the best case for
the patch, where a once-computed SnapshotData is reused again and again.

The worst case seems to be small, quick write transactions, which frequently
invalidate the cached SnapshotData before it is reused by any subsequent
GetSnapshotData call. So far I have tested simple-update runs (pgbench -M
prepared -N) on the same machine with the above server configuration, and I
do not see much change in TPS numbers.

All TPS are median of 3 runs
Clients    TPS (Patch 05)    TPS (Base)       %Diff
1          752.461117        755.186777       -0.3%
64         32171.296537      31202.153576     +3.1%
128        41059.660769      40061.929658     +2.49%

I will do some profiling to find out why this case does not cost us any
performance despite the caching overhead.

>> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
>> index a7e8cf2..4d7debe 100644
>> --- a/src/backend/storage/ipc/procarray.c
>> +++ b/src/backend/storage/ipc/procarray.c
>> @@ -366,11 +366,13 @@ ProcArrayRemove(PGPROC *proc, TransactionId latestXid)
>> (arrayP->numProcs - index - 1) * sizeof(int));
>> arrayP->pgprocnos[arrayP->numProcs - 1] = -1; /* for debugging */
>> arrayP->numProcs--;
>> + ProcGlobal->is_snapshot_valid = false;
>> LWLockRelease(ProcArrayLock);
>> return;
>> }
>> }
>>
>> + ProcGlobal->is_snapshot_valid = false;
>> /* Oops */
>> LWLockRelease(ProcArrayLock);
>
> Why do we need to do anything here? And if we need to, why not in
> ProcArrayAdd?

-- In ProcArrayRemove I can see that ShmemVariableCache->latestCompletedXid
is set to latestXid, which changes the xmax computed in GetSnapshotData,
hence the snapshot is invalidated there. ProcArrayAdd does not seem to affect
the SnapshotData. I have now modified the patch to set cached_snapshot_valid
to false only just below the statement
ShmemVariableCache->latestCompletedXid = latestXid:
+if (TransactionIdIsValid(latestXid))
+{
+ Assert(TransactionIdIsValid(allPgXact[proc->pgprocno].xid));
+ /* Advance global latestCompletedXid while holding the lock */
+ if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,
+ latestXid))
+ ShmemVariableCache->latestCompletedXid = latestXid;
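
That is, the cache invalidation now sits next to the advance of
latestCompletedXid, roughly as below (a sketch of the described change, not
the exact hunk from the attached patch):

if (TransactionIdIsValid(latestXid))
{
    Assert(TransactionIdIsValid(allPgXact[proc->pgprocno].xid));

    /* Advance global latestCompletedXid while holding the lock */
    if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,
                              latestXid))
        ShmemVariableCache->latestCompletedXid = latestXid;

    /*
     * A transaction has ended, so any previously cached snapshot is now
     * stale; invalidate it while we still hold ProcArrayLock exclusively.
     */
    ProcGlobal->cached_snapshot_valid = false;
}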

>> @@ -1552,6 +1557,8 @@ GetSnapshotData(Snapshot snapshot)
>> globalxmin = xmin = xmax;
>>
>> - snapshot->takenDuringRecovery = RecoveryInProgress();
>> -
>
> This movement of code seems fairly unrelated?

-- Sorry, not related; it was part of the early POC patch and was retained
as-is. I have now reverted that change and moved the assignment back under
the ProcArrayLock.

>> if (!snapshot->takenDuringRecovery)
>> {
>
> Do we really need to restrict this to !recovery snapshots? It'd be nicer
> if we could generalize it - I don't immediately see anything !recovery
> specific in the logic here.

-- Agreed. I will add one more patch to include the standby (recovery) case
as well, to unify all the cases where we can cache the snapshot. In this
patch, I have moved the code into 2 functions: one to cache the snapshot data
and another to read the snapshot data from the cache (rough sketch below). In
the next patch, I will try to unify this with the recovery (standby) case.
For the recovery case we copy the xids from a simple xid array,
KnownAssignedXids[]; I am not completely sure whether caching them brings
similar performance benefits, so I will also try to get a perf report for
that.
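
The rough shape of the split is as below (a sketch only; the helper names and
exact signatures are illustrative and may differ from the attached patch):

/*
 * Copy ProcGlobal->cached_snapshot into the caller's SnapshotData, keeping
 * the caller's own xip/subxip arrays. Caller holds ProcArrayLock in shared
 * mode and has already seen ProcGlobal->cached_snapshot_valid == true.
 */
static void GetSnapshotDataFromCache(Snapshot snapshot,
                                     TransactionId *globalxmin);

/*
 * Opportunistically publish a freshly computed snapshot in shared memory.
 * Only the backend that gets the try-lock caches; everyone else skips this
 * step and continues as before.
 */
static void
CacheSnapshotData(Snapshot snapshot, TransactionId globalxmin)
{
    if (ProcGlobal->cached_snapshot_valid ||
        !LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
                                  LW_EXCLUSIVE))
        return;

    /* ... copy xmin, xmax, xip[], subxip[], suboverflowed, etc. from
     * the caller's snapshot into ProcGlobal->cached_snapshot ... */
    ProcGlobal->cached_snapshot_globalxmin = globalxmin;

    ProcGlobal->cached_snapshot_valid = true;
    LWLockRelease(&ProcGlobal->CachedSnapshotLock);
}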

>
>> + * Let only one of the caller cache the computed snapshot, and
>> + * others can continue as before.
>> */
>> - if (!suboverflowed)
>> + if (!ProcGlobal->is_snapshot_valid &&
>> + LWLockConditionalAcquire(&ProcGlobal->CachedSnapshotLock,
>> + LW_EXCLUSIVE))
>> {
>
> I'd also move this to a helper function (the bit inside the lock).

-- Fixed as suggested.

>> +
>> + /*
>> + * The memory barrier has be to be placed here to ensure that
>> + * is_snapshot_valid is set only after snapshot is cached.
>> + */
>> + pg_write_barrier();
>> + ProcGlobal->is_snapshot_valid = true;
>> + LWLockRelease(&ProcGlobal->CachedSnapshotLock);
>
> LWLockRelease() is a full memory barrier, so this shouldn't be required.

-- Okay, sorry for my misunderstanding. Do you want me to set
ProcGlobal->is_snapshot_valid after
LWLockRelease(&ProcGlobal->CachedSnapshotLock)?

>> + SnapshotData cached_snapshot; /* Previously saved snapshot */
>> + volatile bool is_snapshot_valid; /* is above snapshot valid */
>> + TransactionId cached_snapshot_globalxmin; /* globalxmin when above
>
>
> I'd rename is_snapshot_valid to 'cached_snapshot_valid' or such, it's
> not clear from the name what it refers to.

-- Fixed; renamed as suggested.

>
>> + LWLock CachedSnapshotLock; /* A try lock to make sure only one
>> + * process write to cached_snapshot */
>> } PROC_HDR;
>>
>
> Hm, I'd not add the lock to the struct, we normally don't do that for
> the "in core" lwlocks that don't exist in a "configurable" number like
> the buffer locks and such.
>

-- I am not really sure about this. Could you please clarify what exactly I
should be doing here to get this corrected? Do I have to add it to
lwlocknames.txt as a new LWLock?
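
My current guess (please correct me if this is not what you meant) is to
register the lock in src/backend/storage/lmgr/lwlocknames.txt, so that a
CachedSnapshotLock define into MainLWLockArray is generated and the PROC_HDR
member goes away. The call sites would then look roughly like this (sketch
only; the actual lwlocknames.txt slot number would be whatever is next free):

/*
 * Assuming lwlocknames.txt gains a line such as
 *     CachedSnapshotLock    <next free slot>
 * the generated macro can be used directly instead of
 * &ProcGlobal->CachedSnapshotLock:
 */
if (!ProcGlobal->cached_snapshot_valid &&
    LWLockConditionalAcquire(CachedSnapshotLock, LW_EXCLUSIVE))
{
    /* ... store the freshly computed snapshot ... */
    ProcGlobal->cached_snapshot_valid = true;
    LWLockRelease(CachedSnapshotLock);
}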

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment Content-Type Size
cache_the_snapshot_performance.ods application/vnd.oasis.opendocument.spreadsheet 17.8 KB
Cache_data_in_GetSnapshotData_POC_05.patch application/octet-stream 18.5 KB

From: Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-13 13:54:53
Message-ID: [email protected]
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 08/29/2017 05:04 AM, Mithun Cy wrote:
> Test Setting:
> =========
> Server configuration:
> ./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
> max_wal_size=20GB -c checkpoint_timeout=900 -c
> maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
> wal_buffers=256MB &
>
> pgbench configuration:
> scale_factor = 300
> ./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S postgres
>
> The machine has 64 cores with this patch I can see server starts
> improvement after 64 clients. I have tested up to 256 clients. Which
> shows performance improvement nearly max 39% [1]. This is the best
> case for the patch where once computed snapshotData is reused further.
>
> The worst case seems to be small, quick write transactions which
> frequently invalidate the cached SnapshotData before it is reused by
> any next GetSnapshotData call. As of now, I tested simple update
> tests: pgbench -M Prepared -N on the same machine with the above
> server configuration. I do not see much change in TPS numbers.
>
> All TPS are median of 3 runs
> Clients TPS-With Patch 05 TPS-Base %Diff
> 1 752.461117 755.186777 -0.3%
> 64 32171.296537 31202.153576 +3.1%
> 128 41059.660769 40061.929658 +2.49%
>
> I will do some profiling and find out why this case is not costing us
> some performance due to caching overhead.
>

I have done a run with this patch on a 2S/28C/56T/256Gb w/ 2 x RAID10
SSD machine.

For both -M prepared and -M prepared -S I'm not seeing any improvements
(1 to 375 clients), i.e. within +/-1%.

Although the -M prepared -S case should improve, I'm not sure that the
extra overhead in the -M prepared case is worth the added code complexity.

Best regards,
Jesper


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-14 08:53:40
Message-ID: CAD__OujRZEjE5y3vfmmZmSSr3oYGZSHRxwDwF7kyhBHB2BpW_g@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 13, 2017 at 7:24 PM, Jesper Pedersen
<jesper(dot)pedersen(at)redhat(dot)com> wrote:
> I have done a run with this patch on a 2S/28C/56T/256Gb w/ 2 x RAID10 SSD
> machine.
>
> Both for -M prepared, and -M prepared -S I'm not seeing any improvements (1
> to 375 clients); e.g. +-1%.

My test was done on an 8-socket NUMA Intel machine, where I could
clearly see improvements, as posted before.
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Model name: Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 1064.000
BogoMIPS: 4266.62
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 24576K
NUMA node0 CPU(s): 0,65-71,96-103
NUMA node1 CPU(s): 72-79,104-111
NUMA node2 CPU(s): 80-87,112-119
NUMA node3 CPU(s): 88-95,120-127
NUMA node4 CPU(s): 1-8,33-40
NUMA node5 CPU(s): 9-16,41-48
NUMA node6 CPU(s): 17-24,49-56
NUMA node7 CPU(s): 25-32,57-64

Let me recheck whether the improvement is due to NUMA or to cache sizes.
Currently that machine is not available to me; it will take 2 more days to
get access again.

> Although the -M prepared -S case should improve, I'm not sure that the extra
> overhead in the -M prepared case is worth the added code complexity.

Each tpcb-like (-M prepared) transaction in pgbench has 3 updates, 1 insert
and 1 select statement, so there are more GetSnapshotData calls than
transaction ends (which are what invalidate the cached snapshot). I therefore
think that whatever we cache has a good chance of being used before it is
invalidated in -M prepared. But in the worst case, quick write transactions
that invalidate the cached snapshot before it is ever reused turn the caching
into pure overhead. As Andres suggested previously, I need a mechanism to
identify such scenarios and avoid caching there. I do not have the right
solution for this at present, but one thing we could try is to skip caching
if, just before caching, we see an exclusive request in the wait queue of
ProcArrayLock.
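
Something along these lines is what I have in mind (a rough sketch, assuming
the existing LWLockHasWaiters() helper can serve as a coarse check; it only
tells us that some backend is waiting and cannot distinguish exclusive
waiters, so a precise check would need new lwlock infrastructure):

/*
 * Sketch: skip publishing the just-computed snapshot if ProcArrayLock
 * already has waiters, on the theory that a waiting writer is about to
 * invalidate the cache anyway.
 */
static bool
SnapshotCachingWorthwhile(void)
{
    return !LWLockHasWaiters(ProcArrayLock);
}

This would be consulted just before the existing LWLockConditionalAcquire()
on the snapshot cache lock, so the common read-only path stays unchanged.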

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-20 18:15:51
Message-ID: CA+TgmoZx2Q-8Xnz9+0XvdHp9s3yq+_Yy1Ka5ERPXEcYuAeCQGg@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Aug 29, 2017 at 1:57 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> All TPS are median of 3 runs
> Clients TPS-With Patch 05 TPS-Base %Diff
> 1 752.461117 755.186777 -0.3%
> 64 32171.296537 31202.153576 +3.1%
> 128 41059.660769 40061.929658 +2.49%
>
> I will do some profiling and find out why this case is not costing us
> some performance due to caching overhead.

So, this shows only a 2.49% improvement at 128 clients but in the
earlier message you reported a 39% speedup at 256 clients. Is that
really correct? There's basically no improvement up to threads = 2 x
CPU cores, and then after that it starts to improve rapidly? What
happens at intermediate points, like 160, 192, 224 clients?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 01:15:50
Message-ID: CAB7nPqRDQVo4o7dpJrOaNbb+gVh1xDsdZpJjoDqD_tV2dKdE7Q@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Sep 21, 2017 at 3:15 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Aug 29, 2017 at 1:57 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
>> All TPS are median of 3 runs
>> Clients TPS-With Patch 05 TPS-Base %Diff
>> 1 752.461117 755.186777 -0.3%
>> 64 32171.296537 31202.153576 +3.1%
>> 128 41059.660769 40061.929658 +2.49%
>>
>> I will do some profiling and find out why this case is not costing us
>> some performance due to caching overhead.
>
> So, this shows only a 2.49% improvement at 128 clients but in the
> earlier message you reported a 39% speedup at 256 clients. Is that
> really correct? There's basically no improvement up to threads = 2 x
> CPU cores, and then after that it starts to improve rapidly? What
> happens at intermediate points, like 160, 192, 224 clients?

It could be interesting to do multiple runs as well, eliminating the runs
with the highest and lowest results and taking an average of the others.
2-3% is within the noise band of pgbench.
--
Michael


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 01:46:54
Message-ID: CAD__Oui+JM2pPCLV9Y1L4PuL1aikyMyNZqKagWqb6rA6Y61DRQ@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 20, 2017 at 11:45 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Aug 29, 2017 at 1:57 AM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
wrote:
>> All TPS are median of 3 runs
>> Clients TPS-With Patch 05 TPS-Base %Diff
>> 1 752.461117 755.186777 -0.3%
>> 64 32171.296537 31202.153576 +3.1%
>> 128 41059.660769 40061.929658 +2.49%
>>
>> I will do some profiling and find out why this case is not costing us
>> some performance due to caching overhead.
>
> So, this shows only a 2.49% improvement at 128 clients but in the
> earlier message you reported a 39% speedup at 256 clients. Is that
> really correct? There's basically no improvement up to threads = 2 x
> CPU cores, and then after that it starts to improve rapidly? What
> happens at intermediate points, like 160, 192, 224 clients?

I think there is some confusion: the above results are for the pgbench
simple-update (-N) tests, where the cached snapshot gets invalidated. I ran
those to check whether there is any regression due to frequent cache
invalidation, and did not find any. The target test for this patch is the
read-only case [1], where we can see a performance improvement as high as 39%
(at 256 clients) on cthulhu (an 8-socket NUMA machine with 64 CPU cores). At
64 clients (= CPU cores) we have a 5% improvement, and at 128 clients (= 2 *
CPU cores, i.e. hyperthreads) we have a 17% improvement.

Clients    BASE CODE        With patch        %Imp
64         452475.929144    476195.952736      5.2422730281
128        556207.727932    653256.029012     17.4482115595
256        494336.282804    691614.000463     39.9075941867

[1] cache_the_snapshot_performance.ods
<https://www.postgresql.org/message-id/CAD__Oui=Q++3ER0pU2HanJqE7nGtOhWxypxxJQVsJqNS91LG-g@mail.gmail.com>
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 01:55:16
Message-ID: CA+TgmoZ1cG4wLzAyt9+0ktjyVkfu8EPE7Ae2AXX44X-EYn4=Ww@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 20, 2017 at 9:46 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> I think there is some confusion above results is for pgbench simple update
> (-N) tests where cached snapshot gets invalidated, I have run this to check
> if there is any regression due to frequent cache invalidation and did not
> find any. The target test for the above patch is read-only case [1] where we
> can see the performance improvement as high as 39% (@256 threads) on
> Cthulhu(a 8 socket numa machine with 64 CPU cores). At 64 threads ( = CPU
> cores) we have 5% improvement and at clients 128 = (2 * CPU cores =
> hyperthreads) we have 17% improvement.
>
> Clients BASE CODE With patch %Imp
>
> 64 452475.929144 476195.952736 5.2422730281
>
> 128 556207.727932 653256.029012 17.4482115595
>
> 256 494336.282804 691614.000463 39.9075941867

Oh, you're right. I was confused.

But now I'm confused about something else: if you're seeing a clear
gain at higher-client counts, why is Jesper Pederson not seeing the
same thing? Do you have results for a 2-socket machine? Maybe this
only helps with >2 sockets.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 02:04:59
Message-ID: CAD__OuhXgZHPBGagySuHOffTb9oWtmhAgdRZCp7pbMj-iA+TfA@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Sep 21, 2017 at 7:25 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Wed, Sep 20, 2017 at 9:46 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
>> I think there is some confusion above results is for pgbench simple update
>> (-N) tests where cached snapshot gets invalidated, I have run this to check
>> if there is any regression due to frequent cache invalidation and did not
>> find any. The target test for the above patch is read-only case [1] where we
>> can see the performance improvement as high as 39% (@256 threads) on
>> Cthulhu(a 8 socket numa machine with 64 CPU cores). At 64 threads ( = CPU
>> cores) we have 5% improvement and at clients 128 = (2 * CPU cores =
>> hyperthreads) we have 17% improvement.
>>
>> Clients BASE CODE With patch %Imp
>>
>> 64 452475.929144 476195.952736 5.2422730281
>>
>> 128 556207.727932 653256.029012 17.4482115595
>>
>> 256 494336.282804 691614.000463 39.9075941867
>
> Oh, you're right. I was confused.
>
> But now I'm confused about something else: if you're seeing a clear
> gain at higher-client counts, why is Jesper Pederson not seeing the
> same thing? Do you have results for a 2-socket machine? Maybe this
> only helps with >2 sockets.

My current tests show that on scylla (a 2-socket machine with 28 CPU cores)
I do not see any improvement at all, similar to Jesper's results. But with
the same tests on power2 (4 sockets) and cthulhu (8 sockets) we can see
improvements.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 02:22:00
Message-ID: CA+TgmoYuuhFPifAPHBCFz95hANeMHVoRKtpon0sdAOYTJWiaiQ@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 20, 2017 at 10:04 PM, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com> wrote:
> My current tests show on scylla (2 socket machine with 28 CPU core) I
> do not see any improvement at all as similar to Jesper. But same tests
> on power2 (4 sockets) and Cthulhu(8 socket machine) we can see
> improvements.

OK, makes sense. So for whatever reason, this appears to be something
that will only help users with >2 sockets. But that's not necessarily
a bad thing.

The small regression at 1 client is a bit concerning though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org,Robert Haas <robertmhaas(at)gmail(dot)com>,Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>,pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 02:32:36
Message-ID: [email protected]
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On September 20, 2017 7:22:00 PM PDT, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>On Wed, Sep 20, 2017 at 10:04 PM, Mithun Cy
><mithun(dot)cy(at)enterprisedb(dot)com> wrote:
>> My current tests show on scylla (2 socket machine with 28 CPU core) I
>> do not see any improvement at all as similar to Jesper. But same
>tests
>> on power2 (4 sockets) and Cthulhu(8 socket machine) we can see
>> improvements.
>
>OK, makes sense. So for whatever reason, this appears to be something
>that will only help users with >2 sockets. But that's not necessarily
>a bad thing.

Wonder how much of that is microarchitecture, how much number of sockets.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject: Re: POC: Cache data in GetSnapshotData()
Date: 2017-09-21 03:35:02
Message-ID: CA+TgmoabdDiT5t5Zo64q4fqTV6bn0=Rzrz-77KP1uYeWNfz7-Q@mail.gmail.com
Views: Whole Thread | Raw Message | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Sep 20, 2017 at 10:32 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Wonder how much of that is microarchitecture, how much number of sockets.

I don't know. How would we tell? power2 is a 4-socket POWER box, and
cthulhu is an 8-socket x64 box, so it's not like they are the same
sort of thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company