From: Koichi S. <koi...@gm...> - 2014-05-25 18:04:42
I see. You have a good use case for read-only transactions. Because of the nature of log shipping and sharding/clustering, it is not simple to provide read-only transactions in XC, for two essential reasons:

1. The delay in WAL playback may be different in each slave. This makes providing a consistent database view extremely difficult.

2. At present, a slave calculates transaction snapshots from the WAL. The current code does not allow missing XIDs; there would be a memory leak and a crash by OOM if there were many missing XIDs in the WAL stream. In XC this is therefore disabled, and the database view may be inconsistent. Please note that this does not affect recovery and promotion.

Read-only scalability is obviously a candidate for our TODO list. Based on the discussion at the PGCon cluster summit, we will open up our roadmap discussion this week and ask for input on feature/performance/quality topics on our general/developer mailing lists. I hope you can post your use case and requirements to that discussion.

Best Regards;
---
Koichi Suzuki

2014-05-25 5:23 GMT-07:00 ZhangJulian <jul...@ou...>:
> Hi Koichi,
>
> Thanks for the explanation.
>
> We have a system which has some OLTP applications and some REPORT applications, and the REPORT system can bear some inconsistency. We do not want the REPORT system to influence the stability of the OLTP system, so read/write separation is applicable in this scenario.
>
> From your advice, I feel I should limit the use case to a smaller scenario; for example, even when the GUC is enabled, only SELECT statements under autocommit=true could be routed to the slaves.
>
> From the other mail thread, the community has planned some other approaches to achieve a similar goal. Because our team does not have much development experience, we plan to train ourselves through this task even if it will not be adopted by the community.
>
> We may ask you for help if we have some questions; thanks in advance!
>
> Thanks
> Julian
>
>> Date: Fri, 23 May 2014 08:52:28 -0400
>> Subject: Re: [Postgres-xc-general] Do you think the new feature is meaningful? - Read/Write Separation
>> From: koi...@gm...
>> To: jul...@ou...
>> CC: pos...@li...
>>
>> Hello;
>>
>> Find my reply inline.
>>
>> Thank you;
>> ---
>> Koichi Suzuki
>>
>> 2014-05-22 23:49 GMT-04:00 ZhangJulian <jul...@ou...>:
>> > Hi Koichi,
>> >
>> > Thanks for your comments!
>> >
>> > 1. The pgxc_node issue.
>> > I feel the pgxc_node table in the data nodes has no use currently, right?
>> > In the current codebase, if a coordinator slave or a data node slave is promoted, an ALTER NODE statement must be executed in all the coordinators, since the pgxc_node table is a local table in each node.
>> > Assuming the feature is applied, the ALTER NODE/CREATE NODE syntax will also be updated so that the master and slave are updated together. Once a coordinator slave or a data node slave is promoted, the information in the other coordinators and in the promoted coordinator could be updated as in the previous behavior.
>>
>> I understand your goal, and it sounds attractive to have such master-slave info inside the database. Maybe we need a better idea which survives slave promotion.
>>
>> > 2. The data between the master and the slave may not be consistent all the time.
>> > This should be a common issue on PostgreSQL and other non-cluster database platforms. There are many users who use a master-slave infrastructure to achieve read/write separation. If users enable the feature, they should know the risk.
>>
>> The use case should be limited. The transaction has to be read only. We cannot transfer on a statement-by-statement basis, and even with transaction-basis transfer we may suffer from such inconsistency. I'm afraid this may not be widely understood. Given this, synchronizing WAL playback on the slaves is an essential issue in providing read transactions on slaves. This was discussed in the cluster summit at PGCon this Tuesday.
>>
>> > 3. The GXID issue.
>> > It is too complex for me; I cannot understand it thoroughly. :) But if the user can bear the data being inconsistent for a short time, it will not be an issue, right?
>>
>> The GXID issue is about providing "atomic visibility" among read and write distributed transactions. It is quite new and may need additional material to understand. Let me prepare a material describing why it is needed and what issues it solves.
>>
>> This kind of thing is essential to provide a consistent database view in the cluster.
>>
>> Please allow me a bit of time to provide background information on this.
>>
>> > Thanks
>> > Julian
>> >
>> >> Date: Thu, 22 May 2014 09:21:28 -0400
>> >> Subject: Re: [Postgres-xc-general] Do you think the new feature is meaningful? - Read/Write Separation
>> >> From: koi...@gm...
>> >> To: jul...@ou...
>> >> CC: pos...@li...
>> >>
>> >> Hello;
>> >>
>> >> Thanks a lot for the idea. Please find my comments inline.
>> >>
>> >> I hope you consider them and move forward to make your goal more feasible.
>> >>
>> >> Regards;
>> >> ---
>> >> Koichi Suzuki
>> >>
>> >> 2014-05-22 4:19 GMT-04:00 ZhangJulian <jul...@ou...>:
>> >> > Hi All,
>> >> >
>> >> > I plan to implement it along the lines of the idea below.
>> >> > 1. Add a new GUC to the coordinator configuration which controls whether the READ/WRITE separation feature is ON or OFF.
>> >> > 2. Extend the catalog table pgxc_node by adding new columns: slave1_host, slave1_port, slave1_id, slave2_host, slave2_port, slave2_id. Suppose at most two slaves are supported.
>> >>
>> >> I don't think this is a good idea. If we have this info in the catalog, it will all go to the slave by WAL shipping and will be used when a slave is promoted.
>> >>
>> >> This information is not valid once the master is gone and one of the slaves is promoted.
>> >>
>> >> > 3. A read-only transaction, or the leading read-only part of a transaction, will be routed to a slave node for execution.
>> >>
>> >> With current WAL shipping, we have to expect some difference in when a transaction or statement update becomes visible on the slave. Even with synchronized replication, there is a slight delay after the WAL record is received before it is replayed and becomes available to the hot standby. There is even a chance that such an update is visible before it is visible at the master.
>> >>
>> >> Therefore, use cases for the current hot standby should allow such differences. I don't think your example allows for these WAL-shipping replication characteristics.
>> >>
>> >> Moreover, the current hot standby implementation assumes the slave will receive every XID in updates. It does not assume there could be missing XIDs, and this assumption is used to generate the snapshot that enforces update visibility.
>> >>
>> >> In XC, because of the GXID nature, some GXIDs may be missing at some slaves.
>> >>
>> >> At present, because we didn't have sufficient resources, snapshot generation is disabled.
>> >>
>> >> In addition to this, a local snapshot may not work. We need a global XID (GXID) to get consistent results.
>> >>
>> >> For such reasons, it is not simple to provide a consistent database view from the slaves.
>> >>
>> >> I discussed this in the PGCon cluster summit this Tuesday, and I'm afraid this needs much more analysis, research and design.
>> >>
>> >> > For example,
>> >> > begin;
>> >> > select ....; ==> go to slave node
>> >> > select ....; ==> go to slave node
>> >> > insert ....; ==> go to master node
>> >> > select ....; ==> go to master node, since it may visit the row inserted by the previous insert statement
>> >> > end;
>> >> >
>> >> > By this, in a cluster,
>> >> > some coordinators can be configured to support the OLTP system, and their queries will be routed to the master data nodes;
>> >> > other coordinators can be configured to support the report system, and their queries will be routed to the slave data nodes;
>> >> > different workloads are thus applied to different coordinators and data nodes, so they can be isolated.
>> >> >
>> >> > Do you think it is valuable? Do you have any advice?
>> >> >
>> >> > Thanks
>> >> > Julian
>> >> >
>> >> > ------------------------------------------------------------------------------
>> >> > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE
>> >> > Instantly run your Selenium tests across 300+ browser/OS combos.
>> >> > Get unparalleled scalability from the best Selenium testing platform available
>> >> > Simple to use. Nothing to install. Get started now for free."
>> >> > http://p.sf.net/sfu/SauceLabs
>> >> > _______________________________________________
>> >> > Postgres-xc-general mailing list
>> >> > Pos...@li...
>> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
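Koichi's first point above — that WAL replay delay can differ on each slave — can at least be observed with the standby monitoring functions of the PostgreSQL 9.x code base that Postgres-XC builds on. A minimal sketch, assuming it is run directly on a hot-standby node and that the stock 9.x function names are available in the XC build; it only measures lag and does not address the snapshot/GXID problem he describes:

    -- true when the node is a standby replaying WAL
    SELECT pg_is_in_recovery();

    -- rough replay delay: time since the last replayed transaction committed on the master
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;

    -- WAL received vs. WAL replayed (PostgreSQL 9.x names; renamed in later releases)
    SELECT pg_last_xlog_receive_location() AS received,
           pg_last_xlog_replay_location()  AS replayed;

If the replayed position lags far behind the received position, reads routed to that slave would see correspondingly stale data, which is exactly the inconsistency discussed in this thread.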
From: ZhangJulian <jul...@ou...> - 2014-05-25 12:23:59
Hi Koichi,

Thanks for the explanation.

We have a system which has some OLTP applications and some REPORT applications, and the REPORT system can bear some inconsistency. We do not want the REPORT system to influence the stability of the OLTP system, so read/write separation is applicable in this scenario.

From your advice, I feel I should limit the use case to a smaller scenario; for example, even when the GUC is enabled, only SELECT statements under autocommit=true could be routed to the slaves.

From the other mail thread, the community has planned some other approaches to achieve a similar goal. Because our team does not have much development experience, we plan to train ourselves through this task even if it will not be adopted by the community.

We may ask you for help if we have some questions; thanks in advance!

Thanks
Julian

> Date: Fri, 23 May 2014 08:52:28 -0400
> Subject: Re: [Postgres-xc-general] Do you think the new feature is meaningful? - Read/Write Separation
> From: koi...@gm...
> To: jul...@ou...
> CC: pos...@li...
>
> Hello;
>
> Find my reply inline.
>
> Thank you;
> ---
> Koichi Suzuki
>
> 2014-05-22 23:49 GMT-04:00 ZhangJulian <jul...@ou...>:
> > Hi Koichi,
> >
> > Thanks for your comments!
> >
> > 1. The pgxc_node issue.
> > I feel the pgxc_node table in the data nodes has no use currently, right?
> > In the current codebase, if a coordinator slave or a data node slave is promoted, an ALTER NODE statement must be executed in all the coordinators, since the pgxc_node table is a local table in each node.
> > Assuming the feature is applied, the ALTER NODE/CREATE NODE syntax will also be updated so that the master and slave are updated together. Once a coordinator slave or a data node slave is promoted, the information in the other coordinators and in the promoted coordinator could be updated as in the previous behavior.
>
> I understand your goal, and it sounds attractive to have such master-slave info inside the database. Maybe we need a better idea which survives slave promotion.
>
> > 2. The data between the master and the slave may not be consistent all the time.
> > This should be a common issue on PostgreSQL and other non-cluster database platforms. There are many users who use a master-slave infrastructure to achieve read/write separation. If users enable the feature, they should know the risk.
>
> The use case should be limited. The transaction has to be read only. We cannot transfer on a statement-by-statement basis, and even with transaction-basis transfer we may suffer from such inconsistency. I'm afraid this may not be widely understood. Given this, synchronizing WAL playback on the slaves is an essential issue in providing read transactions on slaves. This was discussed in the cluster summit at PGCon this Tuesday.
>
> > 3. The GXID issue.
> > It is too complex for me; I cannot understand it thoroughly. :) But if the user can bear the data being inconsistent for a short time, it will not be an issue, right?
>
> The GXID issue is about providing "atomic visibility" among read and write distributed transactions. It is quite new and may need additional material to understand. Let me prepare a material describing why it is needed and what issues it solves.
>
> This kind of thing is essential to provide a consistent database view in the cluster.
>
> Please allow me a bit of time to provide background information on this.
>
> > Thanks
> > Julian
> >
> >> Date: Thu, 22 May 2014 09:21:28 -0400
> >> Subject: Re: [Postgres-xc-general] Do you think the new feature is meaningful? - Read/Write Separation
> >> From: koi...@gm...
> >> To: jul...@ou...
> >> CC: pos...@li...
> >>
> >> Hello;
> >>
> >> Thanks a lot for the idea. Please find my comments inline.
> >>
> >> I hope you consider them and move forward to make your goal more feasible.
> >>
> >> Regards;
> >> ---
> >> Koichi Suzuki
> >>
> >> 2014-05-22 4:19 GMT-04:00 ZhangJulian <jul...@ou...>:
> >> > Hi All,
> >> >
> >> > I plan to implement it along the lines of the idea below.
> >> > 1. Add a new GUC to the coordinator configuration which controls whether the READ/WRITE separation feature is ON or OFF.
> >> > 2. Extend the catalog table pgxc_node by adding new columns: slave1_host, slave1_port, slave1_id, slave2_host, slave2_port, slave2_id. Suppose at most two slaves are supported.
> >>
> >> I don't think this is a good idea. If we have this info in the catalog, it will all go to the slave by WAL shipping and will be used when a slave is promoted.
> >>
> >> This information is not valid once the master is gone and one of the slaves is promoted.
> >>
> >> > 3. A read-only transaction, or the leading read-only part of a transaction, will be routed to a slave node for execution.
> >>
> >> With current WAL shipping, we have to expect some difference in when a transaction or statement update becomes visible on the slave. Even with synchronized replication, there is a slight delay after the WAL record is received before it is replayed and becomes available to the hot standby. There is even a chance that such an update is visible before it is visible at the master.
> >>
> >> Therefore, use cases for the current hot standby should allow such differences. I don't think your example allows for these WAL-shipping replication characteristics.
> >>
> >> Moreover, the current hot standby implementation assumes the slave will receive every XID in updates. It does not assume there could be missing XIDs, and this assumption is used to generate the snapshot that enforces update visibility.
> >>
> >> In XC, because of the GXID nature, some GXIDs may be missing at some slaves.
> >>
> >> At present, because we didn't have sufficient resources, snapshot generation is disabled.
> >>
> >> In addition to this, a local snapshot may not work. We need a global XID (GXID) to get consistent results.
> >>
> >> For such reasons, it is not simple to provide a consistent database view from the slaves.
> >>
> >> I discussed this in the PGCon cluster summit this Tuesday, and I'm afraid this needs much more analysis, research and design.
> >>
> >> > For example,
> >> > begin;
> >> > select ....; ==> go to slave node
> >> > select ....; ==> go to slave node
> >> > insert ....; ==> go to master node
> >> > select ....; ==> go to master node, since it may visit the row inserted by the previous insert statement
> >> > end;
> >> >
> >> > By this, in a cluster,
> >> > some coordinators can be configured to support the OLTP system, and their queries will be routed to the master data nodes;
> >> > other coordinators can be configured to support the report system, and their queries will be routed to the slave data nodes;
> >> > different workloads are thus applied to different coordinators and data nodes, so they can be isolated.
> >> >
> >> > Do you think it is valuable? Do you have any advice?
> >> >
> >> > Thanks
> >> > Julian
> >> >
> >> > ------------------------------------------------------------------------------
> >> > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE
> >> > Instantly run your Selenium tests across 300+ browser/OS combos.
> >> > Get unparalleled scalability from the best Selenium testing platform available
> >> > Simple to use. Nothing to install. Get started now for free."
> >> > http://p.sf.net/sfu/SauceLabs
> >> > _______________________________________________
> >> > Postgres-xc-general mailing list
> >> > Pos...@li...
> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
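The narrowed proposal in Julian's message — route a statement to a datanode slave only when a dedicated GUC is on and the statement is a SELECT issued under autocommit — can be illustrated with a short sketch. Everything here is hypothetical: the GUC name, the extra pgxc_node columns and the routing behaviour were only proposed in this thread and are not part of Postgres-XC, and the table used is invented for illustration:

    -- Hypothetical coordinator setting proposed in the thread (name invented):
    --   read_write_separation = on
    -- Hypothetical pgxc_node extension from the proposal:
    --   slave1_host, slave1_port, slave1_id, slave2_host, slave2_port, slave2_id

    SELECT count(*) FROM reports;                          -- autocommit SELECT: eligible for a datanode slave

    BEGIN;
    SELECT * FROM reports WHERE id = 42;                   -- inside an explicit transaction: stays on the master
    INSERT INTO reports (id, body) VALUES (42, 'draft');   -- writes always go to the master
    SELECT * FROM reports WHERE id = 42;                   -- must see the row just inserted, so master again
    COMMIT;

Restricting routing to autocommit SELECTs sidesteps the within-transaction visibility problem in the earlier begin/select/insert example, but the routed reads are still exposed to the per-slave WAL replay delay and the missing-GXID snapshot issue Koichi describes.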