From: Koichi S. <koi...@gm...> - 2012-08-22 14:19:00
Unfortunately, XC does not come with load balancer. I hope static load balancing work in your case, that is, implement connection pooler and assign static (different) access point to different thread. Hope it helps. ---------- Koichi Suzuki 2012/8/22 Nick Maludy <nm...@gm...>: > Koichi, > > Thank you for your insight, i am going to create coordinators on each > datanode and try to distribute my connections from my nodes evenly. > > Does PostgresXC have the ability to automatically load balance my > connections (say coordinator1 is too loaded my connection would get routed > to coordinator2)? Or would i have to do this manully? > > > > Mason, > > I've commented inline below. > > > Thank you both for you input, > -Nick > > On Tue, Aug 21, 2012 at 8:16 PM, Koichi Suzuki <koi...@gm...> > wrote: >> >> ---------- >> Koichi Suzuki >> >> >> 2012/8/22 Mason Sharp <ma...@st...>: >> > On Tue, Aug 21, 2012 at 10:44 AM, Nick Maludy <nm...@gm...> wrote: >> >> All, >> >> >> >> I am currently exploring PostgresXC as a clustering solution for a >> >> project i >> >> am working on. The use case is a follows: >> >> >> >> - Time series data from multiple sensors >> >> - Sensors report at various rates from 50Hz to once every 5 minutes >> >> - INSERTs (COPYs) on the order of 1000+/s >> > >> > This should not be a problem, even for a single PostgreSQL instance. >> > Nonetheless, I would recommend to use COPY when uploading these >> > batches. > > > - Yes our batches of 1000-5000 were working fine with regular Postgres on > our current load. However our load is expected to increase next year and my > benchmarks showed that regular Postgres couldn't keep up with much more than > this. I am sorry to mislead you also, these are 5000 messages. Some of our > messages are quite complex, containing lists of other messages which may > contain lists of yet more messages, etc. We have put these nested lists into > separate tables and so saving one message could mean numerous inserts into > various tables, i can go into more detail later if needed. > >> >> > >> >> - No UPDATEs once the data is in the database we consider it immutable >> > >> > Nice, no need to worry about update bloat and long vacuums. >> > >> >> - Large volumes of data needs to be stored (one sensor 50Hz sensor = >> >> ~1.5 >> >> billion rows for a year of collection) >> > >> > No problem. >> > >> >> - SELECTs need to run as quick as possible for UI and data analysis >> >> - Number of clients connections = 10-20, +95% of the INSERTs are done >> >> by one >> >> node, +99% of the SELECTs are done by the rest of the nodes >> > >> > I am not sure what you mean. One client connection is doing 95% of the >> > inserts? Or 95% of the writes ends up on one single data node? >> > >> > Same thing with the 99%. Sorry, I am not quite sure I understand. >> > > > - We currently only have one node in our network which writes to the > database, so all of the COPYs come from one libpq client connection. There > is one small use case where this isn't true so that's why i said 95%, but to > simplify things we can say only one node writes to the database. > > - We have several other nodes which do data crunching and display > information to users, these nodes do all of the SELECTs. > >> >> > >> >> - Very write heavy application, reads are not nearly as frequent as >> >> writes >> >> but usually involve large amounts of data. >> > >> > Since you said it is sensor data, is it pretty much one large table? >> > That should work fine for large reads on Postgres-XC. 
This is sounding >> > like a good use case for Postgres-XC. >> > > > > - Our system collects data from several different types of sensors so we > have a table for each type, along with tables for our application specific > data. I would estimate around 10 tables contain a majority of our data > currently. > >> >> >> >> >> My current cluster configuration is as follows >> >> >> >> Server A: GTM >> >> Server B: GTM Proxy, Coordinator >> >> Server C: Datanode >> >> Server D: Datanode >> >> Server E: Datanode >> >> >> >> My question is, in your documentation you recommend having a >> >> coordinator at >> >> each datanode, what is the rational for this? >> >> >> > >> > You don't necessarily need to. If you have a lot of replicated tables >> > (not distributed), it can help because it just reads locally without >> > needing to hit up another server. It also ensures an even distribution >> > of your workload across the cluster. >> > >> > The flip side of this is a dedicated coordinator server can be a less >> > expensive server compared to the data nodes, so you can consider >> > price/performance. You can also easily add another dedicated >> > coordinator if it turns out your coordinator is bottle-necked, though >> > you could do that with the other configuration as well. >> > >> > So, it depends on your workload. If you have 3 data nodes and you also >> > ran a coordinator process on each and load balanced, 1/3rd of the time >> > a local read could be done. >> > > > > - I like your reasoning for having a coordinator on each datanode so we can > exploit local reads. > - I have chosen not to have any replicated tables simply because these > tables are expected to grow extremely large and will be too big to fit on > one node. My current DISTRIBUTE BY scheme is ROUND ROBIN so the data is > balanced between all of my nodes. > >> >> >> Do you think it would be appropriate in my situation with so few >> >> connections? >> >> >> >> Would i get better read performance, and not hurt my write performance >> >> too >> >> much (write performance is more important than read)? >> >> >> > >> > If you have the time, ideally I would test it out and see how it >> > performs for your workload. From what you described, there may not be >> > much of a difference. >> >> There're couple of reasons to configure both coordinator and datanode >> in each server. >> >> 1) You don't have to worry about load balancing between coordinator >> and datanode. >> 2) If target data is located locally, you can save network >> communication. In DBT-1 benchmark, this contributes to the overall >> throughput. >> 3) More datanodes, better parallelism. If you have four servers of >> the same spec, you can have four parallel I/O, instead of three. >> >> Of course, they depend on your transaction. >> >> >> Regards; >> --- >> Koichi Suzuki >> >> So, if you can have >> > >> >> Thanks, >> >> Nick >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Live Security Virtual Conference >> >> Exclusive live event will cover all the ways today's security and >> >> threat landscape has changed and how IT managers can respond. >> >> Discussions >> >> will include endpoint security, mobile security and the latest in >> >> malware >> >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> >> _______________________________________________ >> >> Postgres-xc-general mailing list >> >> Pos...@li... 
>> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> > >> > >> > >> > -- >> > Mason Sharp >> > >> > StormDB - http://www.stormdb.com >> > The Database Cloud - Postgres-XC Support and Service >> > >> > >> > ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. >> > Discussions >> > will include endpoint security, mobile security and the latest in >> > malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > _______________________________________________ >> > Postgres-xc-general mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
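As a rough illustration of the static load balancing described above, each writer process can simply be pinned to one coordinator at start-up; the hostnames, ports and worker count below are placeholders rather than anything from this thread:

COORDS=("serverC:5432" "serverD:5432" "serverE:5432")   # one coordinator per datanode server
for i in 0 1 2 3 4 5; do
  # Round-robin assignment done once, so each worker always talks to the same coordinator.
  HOSTPORT=${COORDS[$((i % ${#COORDS[@]}))]}
  HOST=${HOSTPORT%%:*}
  PORT=${HOSTPORT##*:}
  psql -h "$HOST" -p "$PORT" -c "SELECT 1" postgres &   # stand-in for the real write workload
done
wait

Since the assignment is static, a heavily loaded coordinator will not shed work to the others; that part would have to be handled in the application or by an external pooler.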
From: Nick M. <nm...@gm...> - 2012-08-22 13:40:27
Koichi, Thank you for your insight, i am going to create coordinators on each datanode and try to distribute my connections from my nodes evenly. Does PostgresXC have the ability to automatically load balance my connections (say coordinator1 is too loaded my connection would get routed to coordinator2)? Or would i have to do this manully? Mason, I've commented inline below. Thank you both for you input, -Nick On Tue, Aug 21, 2012 at 8:16 PM, Koichi Suzuki <koi...@gm...>wrote: > ---------- > Koichi Suzuki > > > 2012/8/22 Mason Sharp <ma...@st...>: > > On Tue, Aug 21, 2012 at 10:44 AM, Nick Maludy <nm...@gm...> wrote: > >> All, > >> > >> I am currently exploring PostgresXC as a clustering solution for a > project i > >> am working on. The use case is a follows: > >> > >> - Time series data from multiple sensors > >> - Sensors report at various rates from 50Hz to once every 5 minutes > >> - INSERTs (COPYs) on the order of 1000+/s > > > > This should not be a problem, even for a single PostgreSQL instance. > > Nonetheless, I would recommend to use COPY when uploading these > > batches. > - Yes our batches of 1000-5000 were working fine with regular Postgres on our current load. However our load is expected to increase next year and my benchmarks showed that regular Postgres couldn't keep up with much more than this. I am sorry to mislead you also, these are 5000 messages. Some of our messages are quite complex, containing lists of other messages which may contain lists of yet more messages, etc. We have put these nested lists into separate tables and so saving one message could mean numerous inserts into various tables, i can go into more detail later if needed. > > > >> - No UPDATEs once the data is in the database we consider it immutable > > > > Nice, no need to worry about update bloat and long vacuums. > > > >> - Large volumes of data needs to be stored (one sensor 50Hz sensor = > ~1.5 > >> billion rows for a year of collection) > > > > No problem. > > > >> - SELECTs need to run as quick as possible for UI and data analysis > >> - Number of clients connections = 10-20, +95% of the INSERTs are done > by one > >> node, +99% of the SELECTs are done by the rest of the nodes > > > > I am not sure what you mean. One client connection is doing 95% of the > > inserts? Or 95% of the writes ends up on one single data node? > > > > Same thing with the 99%. Sorry, I am not quite sure I understand. > > > - We currently only have one node in our network which writes to the database, so all of the COPYs come from one libpq client connection. There is one small use case where this isn't true so that's why i said 95%, but to simplify things we can say only one node writes to the database. - We have several other nodes which do data crunching and display information to users, these nodes do all of the SELECTs. > > > >> - Very write heavy application, reads are not nearly as frequent as > writes > >> but usually involve large amounts of data. > > > > Since you said it is sensor data, is it pretty much one large table? > > That should work fine for large reads on Postgres-XC. This is sounding > > like a good use case for Postgres-XC. > > > - Our system collects data from several different types of sensors so we have a table for each type, along with tables for our application specific data. I would estimate around 10 tables contain a majority of our data currently. 
> >> > >> My current cluster configuration is as follows > >> > >> Server A: GTM > >> Server B: GTM Proxy, Coordinator > >> Server C: Datanode > >> Server D: Datanode > >> Server E: Datanode > >> > >> My question is, in your documentation you recommend having a > coordinator at > >> each datanode, what is the rational for this? > >> > > > > You don't necessarily need to. If you have a lot of replicated tables > > (not distributed), it can help because it just reads locally without > > needing to hit up another server. It also ensures an even distribution > > of your workload across the cluster. > > > > The flip side of this is a dedicated coordinator server can be a less > > expensive server compared to the data nodes, so you can consider > > price/performance. You can also easily add another dedicated > > coordinator if it turns out your coordinator is bottle-necked, though > > you could do that with the other configuration as well. > > > > So, it depends on your workload. If you have 3 data nodes and you also > > ran a coordinator process on each and load balanced, 1/3rd of the time > > a local read could be done. > > > - I like your reasoning for having a coordinator on each datanode so we can exploit local reads. - I have chosen not to have any replicated tables simply because these tables are expected to grow extremely large and will be too big to fit on one node. My current DISTRIBUTE BY scheme is ROUND ROBIN so the data is balanced between all of my nodes. > >> Do you think it would be appropriate in my situation with so few > >> connections? > >> > >> Would i get better read performance, and not hurt my write performance > too > >> much (write performance is more important than read)? > >> > > > > If you have the time, ideally I would test it out and see how it > > performs for your workload. From what you described, there may not be > > much of a difference. > > There're couple of reasons to configure both coordinator and datanode > in each server. > > 1) You don't have to worry about load balancing between coordinator > and datanode. > 2) If target data is located locally, you can save network > communication. In DBT-1 benchmark, this contributes to the overall > throughput. > 3) More datanodes, better parallelism. If you have four servers of > the same spec, you can have four parallel I/O, instead of three. > > Of course, they depend on your transaction. > Regards; > --- > Koichi Suzuki > > So, if you can have > > > >> Thanks, > >> Nick > >> > >> > >> > ------------------------------------------------------------------------------ > >> Live Security Virtual Conference > >> Exclusive live event will cover all the ways today's security and > >> threat landscape has changed and how IT managers can respond. > Discussions > >> will include endpoint security, mobile security and the latest in > malware > >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > >> _______________________________________________ > >> Postgres-xc-general mailing list > >> Pos...@li... 
> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >> > > > > > > > > -- > > Mason Sharp > > > > StormDB - http://www.stormdb.com > > The Database Cloud - Postgres-XC Support and Service > > > > > ------------------------------------------------------------------------------ > > Live Security Virtual Conference > > Exclusive live event will cover all the ways today's security and > > threat landscape has changed and how IT managers can respond. Discussions > > will include endpoint security, mobile security and the latest in malware > > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > _______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
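For reference, the kind of layout Nick describes (a parent message table plus separate tables for the nested lists, all spread evenly across the datanodes) could be declared roughly as follows; table and column names are invented for illustration, and $COORD_PORT stands for whichever coordinator you connect to:

psql -p $COORD_PORT postgres <<'SQL'
-- Illustrative schema only; ROUND ROBIN sends each row to the next datanode in turn.
CREATE TABLE sensor_msg (
    msg_id     bigint,
    sensor_id  int          NOT NULL,
    recorded   timestamptz  NOT NULL,
    payload    text
) DISTRIBUTE BY ROUND ROBIN;

-- Nested lists go into their own table, keyed back to the parent by msg_id.
CREATE TABLE sensor_msg_item (
    msg_id   bigint,
    item_no  int,
    value    double precision
) DISTRIBUTE BY ROUND ROBIN;
SQL

Note that with ROUND ROBIN there is no distribution column, so a join between the two tables cannot be kept on a single datanode; HASH(msg_id) on both tables would keep a message and its items together if that matters on the read side.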
From: Koichi S. <koi...@gm...> - 2012-08-22 02:35:37
Hi, I've uploaded the tools pgxc and pgxclocal to SourceForge as pgxc-tools-V_1_0_0.tgz. You can download them from the https://sourceforge.net/projects/postgres-xc/files/Utilities/ page. They will help you run Postgres-XC on a single server or across multiple servers. Enjoy. ---------- Koichi Suzuki
From: Koichi S. <koi...@gm...> - 2012-08-22 00:16:54
---------- Koichi Suzuki 2012/8/22 Mason Sharp <ma...@st...>: > On Tue, Aug 21, 2012 at 10:44 AM, Nick Maludy <nm...@gm...> wrote: >> All, >> >> I am currently exploring PostgresXC as a clustering solution for a project i >> am working on. The use case is a follows: >> >> - Time series data from multiple sensors >> - Sensors report at various rates from 50Hz to once every 5 minutes >> - INSERTs (COPYs) on the order of 1000+/s > > This should not be a problem, even for a single PostgreSQL instance. > Nonetheless, I would recommend to use COPY when uploading these > batches. > >> - No UPDATEs once the data is in the database we consider it immutable > > Nice, no need to worry about update bloat and long vacuums. > >> - Large volumes of data needs to be stored (one sensor 50Hz sensor = ~1.5 >> billion rows for a year of collection) > > No problem. > >> - SELECTs need to run as quick as possible for UI and data analysis >> - Number of clients connections = 10-20, +95% of the INSERTs are done by one >> node, +99% of the SELECTs are done by the rest of the nodes > > I am not sure what you mean. One client connection is doing 95% of the > inserts? Or 95% of the writes ends up on one single data node? > > Same thing with the 99%. Sorry, I am not quite sure I understand. > > >> - Very write heavy application, reads are not nearly as frequent as writes >> but usually involve large amounts of data. > > Since you said it is sensor data, is it pretty much one large table? > That should work fine for large reads on Postgres-XC. This is sounding > like a good use case for Postgres-XC. > >> >> My current cluster configuration is as follows >> >> Server A: GTM >> Server B: GTM Proxy, Coordinator >> Server C: Datanode >> Server D: Datanode >> Server E: Datanode >> >> My question is, in your documentation you recommend having a coordinator at >> each datanode, what is the rational for this? >> > > You don't necessarily need to. If you have a lot of replicated tables > (not distributed), it can help because it just reads locally without > needing to hit up another server. It also ensures an even distribution > of your workload across the cluster. > > The flip side of this is a dedicated coordinator server can be a less > expensive server compared to the data nodes, so you can consider > price/performance. You can also easily add another dedicated > coordinator if it turns out your coordinator is bottle-necked, though > you could do that with the other configuration as well. > > So, it depends on your workload. If you have 3 data nodes and you also > ran a coordinator process on each and load balanced, 1/3rd of the time > a local read could be done. > >> Do you think it would be appropriate in my situation with so few >> connections? >> >> Would i get better read performance, and not hurt my write performance too >> much (write performance is more important than read)? >> > > If you have the time, ideally I would test it out and see how it > performs for your workload. From what you described, there may not be > much of a difference. There're couple of reasons to configure both coordinator and datanode in each server. 1) You don't have to worry about load balancing between coordinator and datanode. 2) If target data is located locally, you can save network communication. In DBT-1 benchmark, this contributes to the overall throughput. 3) More datanodes, better parallelism. If you have four servers of the same spec, you can have four parallel I/O, instead of three. Of course, they depend on your transaction. 
Regards; --- Koichi Suzuki So, if you can have > >> Thanks, >> Nick >> >> >> ------------------------------------------------------------------------------ >> Live Security Virtual Conference >> Exclusive live event will cover all the ways today's security and >> threat landscape has changed and how IT managers can respond. Discussions >> will include endpoint security, mobile security and the latest in malware >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud - Postgres-XC Support and Service > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Mason S. <ma...@st...> - 2012-08-21 20:33:36
On Tue, Aug 21, 2012 at 10:44 AM, Nick Maludy <nm...@gm...> wrote: > All, > > I am currently exploring PostgresXC as a clustering solution for a project i > am working on. The use case is a follows: > > - Time series data from multiple sensors > - Sensors report at various rates from 50Hz to once every 5 minutes > - INSERTs (COPYs) on the order of 1000+/s This should not be a problem, even for a single PostgreSQL instance. Nonetheless, I would recommend to use COPY when uploading these batches. > - No UPDATEs once the data is in the database we consider it immutable Nice, no need to worry about update bloat and long vacuums. > - Large volumes of data needs to be stored (one sensor 50Hz sensor = ~1.5 > billion rows for a year of collection) No problem. > - SELECTs need to run as quick as possible for UI and data analysis > - Number of clients connections = 10-20, +95% of the INSERTs are done by one > node, +99% of the SELECTs are done by the rest of the nodes I am not sure what you mean. One client connection is doing 95% of the inserts? Or 95% of the writes ends up on one single data node? Same thing with the 99%. Sorry, I am not quite sure I understand. > - Very write heavy application, reads are not nearly as frequent as writes > but usually involve large amounts of data. Since you said it is sensor data, is it pretty much one large table? That should work fine for large reads on Postgres-XC. This is sounding like a good use case for Postgres-XC. > > My current cluster configuration is as follows > > Server A: GTM > Server B: GTM Proxy, Coordinator > Server C: Datanode > Server D: Datanode > Server E: Datanode > > My question is, in your documentation you recommend having a coordinator at > each datanode, what is the rational for this? > You don't necessarily need to. If you have a lot of replicated tables (not distributed), it can help because it just reads locally without needing to hit up another server. It also ensures an even distribution of your workload across the cluster. The flip side of this is a dedicated coordinator server can be a less expensive server compared to the data nodes, so you can consider price/performance. You can also easily add another dedicated coordinator if it turns out your coordinator is bottle-necked, though you could do that with the other configuration as well. So, it depends on your workload. If you have 3 data nodes and you also ran a coordinator process on each and load balanced, 1/3rd of the time a local read could be done. > Do you think it would be appropriate in my situation with so few > connections? > > Would i get better read performance, and not hurt my write performance too > much (write performance is more important than read)? > If you have the time, ideally I would test it out and see how it performs for your workload. From what you described, there may not be much of a difference. > Thanks, > Nick > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud - Postgres-XC Support and Service |
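To make the replicated-versus-distributed trade-off concrete: the DDL differs only in the distribution clause, so a small lookup table can be replicated for purely local reads while the large fact tables stay distributed. Names below are illustrative, not from the thread:

psql -p $COORD_PORT postgres <<'SQL'
CREATE TABLE sensor_type (
    type_id   int,
    type_name text
) DISTRIBUTE BY REPLICATION;        -- full copy on every datanode, reads never leave the node

CREATE TABLE sensor_reading (
    sensor_id int,
    recorded  timestamptz,
    value     double precision
) DISTRIBUTE BY HASH (sensor_id);   -- rows spread across datanodes by sensor_id
SQL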
From: Nick M. <nm...@gm...> - 2012-08-21 14:44:58
All,

I am currently exploring Postgres-XC as a clustering solution for a project I am working on. The use case is as follows:

- Time-series data from multiple sensors
- Sensors report at various rates, from 50 Hz to once every 5 minutes
- INSERTs (COPYs) on the order of 1000+/s
- No UPDATEs; once the data is in the database we consider it immutable
- Large volumes of data need to be stored (one 50 Hz sensor = ~1.5 billion rows for a year of collection)
- SELECTs need to run as quickly as possible for UI and data analysis
- Number of client connections = 10-20; 95%+ of the INSERTs are done by one node, 99%+ of the SELECTs are done by the rest of the nodes
- Very write-heavy application; reads are not nearly as frequent as writes but usually involve large amounts of data.

My current cluster configuration is as follows:

Server A: GTM
Server B: GTM Proxy, Coordinator
Server C: Datanode
Server D: Datanode
Server E: Datanode

My question is: your documentation recommends having a coordinator at each datanode. What is the rationale for this? Do you think it would be appropriate in my situation with so few connections? Would I get better read performance without hurting my write performance too much (write performance is more important than read)?

Thanks,
Nick
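Given that nearly all writes come from a single client, sustaining 1000+ rows/s is usually a matter of buffering rows and flushing each batch with one COPY instead of individual INSERTs. A minimal sketch, with the file path, port and table name as placeholders:

BATCH=/tmp/sensor_batch.csv    # written by the collector, e.g. 1000-5000 rows per flush
psql -p $COORD_PORT -c "\copy sensor_reading FROM '$BATCH' CSV" postgres

COPY runs through the coordinator, which routes the rows to the datanodes according to the table's distribution, so the batch size can be tuned against latency requirements.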
From: Nikhil S. <ni...@st...> - 2012-08-21 12:52:05
>> >> I think the pgsql-jobs list has definitely more activity than ever before >> too nowadays. > > This is only US-based stuff ;) True. But even here in India, we can see increasing demand for PostgreSQL expertise. Good for us all eventually, Michael :) Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud |
From: Michael P. <mic...@gm...> - 2012-08-21 12:33:20
On Tue, Aug 21, 2012 at 9:20 PM, Nikhil Sontakke <ni...@st...>wrote: > Yup, all of this should be pretty good for PostgreSQL and its > derivatives like XC in the long run! > Other things like MongoDB, MariaDB might attract more users also. Definitely. > I think the pgsql-jobs list has definitely more activity than ever before > too nowadays. > This is only US-based stuff ;) -- Michael Paquier http://michael.otacoo.com |
From: Nikhil S. <ni...@st...> - 2012-08-21 12:21:14
Yeah, I retweeted about this too a while back. Yup, all of this should be pretty good for PostgreSQL and its derivatives like XC in the long run! I think the pgsql-jobs list has definitely more activity than ever before too nowadays. Regards, Nikhils On Tue, Aug 21, 2012 at 11:44 AM, Michael Paquier <mic...@gm...> wrote: > Hi all, > > An article I found on the net about the latest evolution of MySQL. > You know that Oracle owns it. Well, they are making the code more and more > closed source by removing test cases from public. > http://blog.mariadb.org/disappearing-test-cases/ > > This is pretty good information not only for Postgres, but systems like > MariaDB or MongoDB. > At some point, this is also good for XC, no? > -- > Michael Paquier > http://michael.otacoo.com > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Michael P. <mic...@gm...> - 2012-08-21 11:57:52
On Tue, Aug 21, 2012 at 8:08 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Suzuki-san, > I looked at the script. It's very well written. > > A script inside a contrib module has to be flaxible enough to allow all > kinds of installations of XC. A user may need various combinations of > physical machines, virtual machines and XC components such as coordinator, > datanode and gtm-proxy, standbys for each of these. There can be other > complications like firewalls, VPNs, various network configurations etc. > Every user has his/her own preferences for scripting language. Many > consider bash to be insecure and use c-shell. For those this script won't > be of any use. A script which resides in contrib module has to be flexible > enough to take into consideration all of those combinations and install > components likewise. Writing such a script is a humongous task. So, I don't > think we should add it to contrib module. But, we should add it in our > documentation or provide it on PGXC page. > Yes, I agree with Ashutosh that adding this script into contrib might be a little bit too heavy as it means that we will have to maintain it as the core. It would also mean that we highly encourage (force) users to use such tools and I think we whould let people be free to use what they want. It is at least what we should do as the core team. Hence, adding that in SourceForge uploader should be enough. > > Same might be the reason, why installers are not part of the code > repository. > For the reason I mentionned before perhaps, and also the same reason why postgres has no pure installer inside its core code. > > On Tue, Aug 21, 2012 at 9:57 AM, Koichi Suzuki <koi...@gm...>wrote: > >> Hi, >> >> Enclosed is first version of "pgxc" bash script, where you can >> configure, initialize, start and stop simple Postgres-XC cluster. >> This script does not provide any HA configuration and is intended for >> learning what XC is and how it works. >> >> You can configure XC with any number of servers. GTM should be >> configured in one of such servers (of course, you can configure GTM in >> separate server). GTM proxy, coordinator and datanode are configured >> in each server. For configuration, you can edit configuration >> section of the script. >> >> The script contains documentation as well. >> >> I'd like to have feedback to this kind of tools which encourages users >> to try XC. I'm thinking to add this to contrib module and its >> documentation to Postgres-XC Wiki. I'll write similar script where we >> can configure XC locally. >> >> Regards; >> ---------- >> Koichi Suzuki >> >> >> ------------------------------------------------------------------------------ >> Live Security Virtual Conference >> Exclusive live event will cover all the ways today's security and >> threat landscape has changed and how IT managers can respond. Discussions >> will include endpoint security, mobile security and the latest in malware >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... 
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Michael Paquier http://michael.otacoo.com |
From: Ashutosh B. <ash...@en...> - 2012-08-21 11:09:04
Hi Suzuki-san, I looked at the script. It's very well written. A script inside a contrib module has to be flaxible enough to allow all kinds of installations of XC. A user may need various combinations of physical machines, virtual machines and XC components such as coordinator, datanode and gtm-proxy, standbys for each of these. There can be other complications like firewalls, VPNs, various network configurations etc. Every user has his/her own preferences for scripting language. Many consider bash to be insecure and use c-shell. For those this script won't be of any use. A script which resides in contrib module has to be flexible enough to take into consideration all of those combinations and install components likewise. Writing such a script is a humongous task. So, I don't think we should add it to contrib module. But, we should add it in our documentation or provide it on PGXC page. Same might be the reason, why installers are not part of the code repository. On Tue, Aug 21, 2012 at 9:57 AM, Koichi Suzuki <koi...@gm...>wrote: > Hi, > > Enclosed is first version of "pgxc" bash script, where you can > configure, initialize, start and stop simple Postgres-XC cluster. > This script does not provide any HA configuration and is intended for > learning what XC is and how it works. > > You can configure XC with any number of servers. GTM should be > configured in one of such servers (of course, you can configure GTM in > separate server). GTM proxy, coordinator and datanode are configured > in each server. For configuration, you can edit configuration > section of the script. > > The script contains documentation as well. > > I'd like to have feedback to this kind of tools which encourages users > to try XC. I'm thinking to add this to contrib module and its > documentation to Postgres-XC Wiki. I'll write similar script where we > can configure XC locally. > > Regards; > ---------- > Koichi Suzuki > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Koichi S. <koi...@gm...> - 2012-08-21 07:48:09
Hi, Enclosed is a local version of the pgxc utility (pgxclocal). With it you can test XC on a single physical server. Regards; ---------- Koichi Suzuki
From: Michael P. <mic...@gm...> - 2012-08-21 06:14:16
Hi all, Here is an article I found on the net about the latest evolution of MySQL. As you know, Oracle owns it. Well, they are making the code more and more closed source by removing test cases from the public releases. http://blog.mariadb.org/disappearing-test-cases/ This is pretty good information not only for Postgres, but also for systems like MariaDB or MongoDB. At some point, this is also good for XC, no? -- Michael Paquier http://michael.otacoo.com
From: Koichi S. <koi...@gm...> - 2012-08-21 04:27:37
Hi, Enclosed is the first version of the "pgxc" bash script, with which you can configure, initialize, start and stop a simple Postgres-XC cluster. This script does not provide any HA configuration and is intended for learning what XC is and how it works. You can configure XC with any number of servers. GTM should be configured on one of those servers (of course, you can also configure GTM on a separate server). A GTM proxy, a coordinator and a datanode are configured on each server. To configure the cluster, you can edit the configuration section of the script. The script contains documentation as well. I'd like feedback on this kind of tool, which encourages users to try XC. I'm thinking of adding this to the contrib module and its documentation to the Postgres-XC Wiki. I'll write a similar script for configuring XC locally. Regards; ---------- Koichi Suzuki
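For readers who have not seen the script yet, its configuration section holds the sort of settings sketched below; the variable names here are invented placeholders, not the script's actual ones, so check the documentation embedded in the script itself:

GTM_HOST=server1                    # where GTM runs
GTM_PORT=20001
SERVERS="server1 server2 server3"   # each server gets a gtm_proxy, a coordinator and a datanode
COORD_PORT=5432
DATANODE_PORT=15432
PGXC_BASE=$HOME/pgxc                # data directories are created under this path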
From: Michael P. <mic...@gm...> - 2012-08-19 21:39:03
This is really great. Just to remind, SourceForge is going to shut down MediaWiki applications from September. So a relocation was necessary. On Sun, Aug 19, 2012 at 8:40 PM, Koichi Suzuki <koi...@gm...>wrote: > Hi, > > Because Sourceforge is closing its Wiki application, I've moved > Postgres-XC Wiki page to > http://postgresxc.wikia.com/wiki/Postgres-XC_Wiki > > If you'd like to contribute to the contents, please let me know your > Wikia account. > > Regards; > ---------- > Koichi Suzuki > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Michael Paquier http://michael.otacoo.com |
From: Koichi S. <koi...@gm...> - 2012-08-19 11:43:58
Hi, I've added a Postgres-XC configuration page at http://postgresxc.wikia.com/wiki/Configuration It deals with two cases: 1) XC installation on a single local server with two coordinators and two datanodes, and 2) XC installation across three servers, one dedicated to GTM, with each of the others running one coordinator and one datanode. Your comments, improvements and further contributions are welcome. Enjoy. ---------- Koichi Suzuki
From: Koichi S. <koi...@gm...> - 2012-08-19 11:41:03
Hi, Because SourceForge is closing its wiki application, I've moved the Postgres-XC Wiki page to http://postgresxc.wikia.com/wiki/Postgres-XC_Wiki If you'd like to contribute to the contents, please let me know your Wikia account. Regards; ---------- Koichi Suzuki
From: Michael P. <mic...@gm...> - 2012-08-10 09:27:12
Hi all, Sorry for sending this message to multiple mailing lists at the same time, but there were a couple of problems with user permissions on the three mailing lists we use for the Postgres-XC project: the general mailing list ( pos...@li...), the bug mailing list ( pos...@li...) and the hackers mailing list ( pos...@li...). For the sake of transparency to the community, let me explain what was happening. Our mailing lists are managed with Mailman on SourceForge. For a reason I do not really understand, a portion of the users of those mailing lists had their messages moderated. The problem was reported by Nikhil Sontakke, but I found that a dozen users on each mailing list were also affected. Some users had their messages hidden, digested or moderated, and sometimes multiple filters were even set for the same user. Why couldn't I see the problem earlier? Simply because I was not receiving the mailing lists' admin messages, as I was not registered as an admin. So I went through all the users' permissions and fixed them manually for everybody. This was not a huge amount of work, as the XC community is not that large compared to PostgreSQL ;) I am now also registered as an admin, so there will *perhaps* not be as many problems in the future. Really sorry for the inconvenience. If you run into any problems in the future, or if you still have problems, you can contact me directly or send an email to the dedicated mailing list and I will take proper action. Best regards, -- Michael Paquier http://michael.otacoo.com
From: Koichi S. <koi...@gm...> - 2012-08-03 00:34:11
Maybe it's the first time to show Postgres-XC in west coast. I'm very excited about the talk and would like to share feedbacks from the audience. Regards; ---------- Koichi Suzuki 2012/8/3 Mason Sharp <ma...@st...>: > I will be giving another Postgres-XC talk, this time in San Francisco > on Aug 7th. More details can be found here: > > http://www.meetup.com/postgresql-1/events/75231762/?a=ea1_grp&rv=ea1 > > Talk Description: > > Introduction to Postgres-XC: Write Scalable Cluster > > Having scalability issues? Attracted to the idea of NoSQL but worried > about consistency and having to rewrite your SQL-based applications? > Postgres-XC version 1.0 was recently released. Come learn what > Postgres-XC is, what use cases make sense for it and what its > limitations are. > > Postgres-XC is a write scalable shared-nothing cluster with > cluster-wide Multiversion Concurrency Control (MVCC) and consistency. > > The talk will be presented by Mason Sharp, one of the original > architects of Postgres-XC and co-founder of StormDB. > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Mason S. <ma...@st...> - 2012-08-02 15:52:16
I will be giving another Postgres-XC talk, this time in San Francisco on Aug 7th. More details can be found here: http://www.meetup.com/postgresql-1/events/75231762/?a=ea1_grp&rv=ea1 Talk Description: Introduction to Postgres-XC: Write Scalable Cluster Having scalability issues? Attracted to the idea of NoSQL but worried about consistency and having to rewrite your SQL-based applications? Postgres-XC version 1.0 was recently released. Come learn what Postgres-XC is, what use cases make sense for it and what its limitations are. Postgres-XC is a write scalable shared-nothing cluster with cluster-wide Multiversion Concurrency Control (MVCC) and consistency. The talk will be presented by Mason Sharp, one of the original architects of Postgres-XC and co-founder of StormDB. -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud |
From: Michael P. <mic...@gm...> - 2012-08-01 23:49:35
On Wed, Aug 1, 2012 at 10:52 PM, Benjamin Henrion <bh...@ud...> wrote: > On Wed, Aug 1, 2012 at 3:31 PM, Michael Paquier > <mic...@gm...> wrote: > > > > On 2012/08/01, at 22:28, Benjamin Henrion <bh...@ud...> wrote: > > > >> On Fri, Jul 27, 2012 at 6:09 PM, Joshua D. Drake <jd...@co...> > wrote: > >>> > >>> Hello, > >>> > >>> That would be very helpful. Thank you for offering. > >> > >> I just installed postgres-xc debian package that's available in SID in > >> an openvz container, now I have this stuff running: > >> > >> ======================================================================== > >> root@sid /var/lib/postgres-xc [13]# ps aux > >> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > >> root 1 0.0 0.0 10588 852 ? Ss 15:20 0:00 init > [2] > >> root 911 0.0 0.0 52584 1624 ? Sl 15:20 0:00 > >> /usr/sbin/rsyslogd -c5 > >> root 922 0.0 0.0 18816 852 ? Ss 15:20 0:00 > /usr/sbin/cron > >> root 929 0.0 0.0 18644 624 ? Ss 15:20 0:00 vzctl: > pts/0 > >> root 930 0.0 0.0 17788 2008 pts/0 Ss 15:20 0:00 -bash > >> 101 8083 0.3 0.1 101552 9140 ? S 15:24 0:00 > >> /usr/bin/postgres -C -D /var/lib/postgres-xc/coord > >> 101 8093 0.0 0.0 101536 1608 ? Ss 15:24 0:00 > >> postgres: pooler process > >> 101 8094 0.0 0.0 101536 1984 ? Ss 15:24 0:00 > >> postgres: writer process > >> 101 8095 0.0 0.0 101536 1812 ? Ss 15:24 0:00 > >> postgres: wal writer process > >> 101 8096 0.0 0.0 102416 3232 ? Ss 15:24 0:00 > >> postgres: autovacuum launcher process > >> 101 8097 0.0 0.0 69496 1700 ? Ss 15:24 0:00 > >> postgres: stats collector process > >> 101 8127 0.3 0.1 101552 9140 ? S 15:24 0:00 > >> /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode1 > >> 101 8135 0.0 0.0 101536 1988 ? Ss 15:24 0:00 > >> postgres: writer process > >> 101 8136 0.0 0.0 101536 1784 ? Ss 15:24 0:00 > >> postgres: wal writer process > >> 101 8137 0.0 0.0 102284 2776 ? Ss 15:24 0:00 > >> postgres: autovacuum launcher process > >> 101 8138 0.0 0.0 69496 1636 ? Ss 15:24 0:00 > >> postgres: stats collector process > >> 101 8145 0.3 0.1 101552 9136 ? S 15:24 0:00 > >> /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode2 > >> 101 8153 0.0 0.0 101536 1984 ? Ss 15:24 0:00 > >> postgres: writer process > >> 101 8154 0.0 0.0 101536 1780 ? Ss 15:24 0:00 > >> postgres: wal writer process > >> 101 8155 0.0 0.0 102284 2772 ? Ss 15:24 0:00 > >> postgres: autovacuum launcher process > >> 101 8156 0.0 0.0 69496 1632 ? Ss 15:24 0:00 > >> postgres: stats collector process > >> root 8781 0.0 0.0 17784 1984 pts/0 S 15:27 0:00 bash > >> root 8792 0.0 0.0 15236 1136 pts/0 R+ 15:29 0:00 ps aux > >> root@sid /var/lib/postgres-xc [14]# > >> ======================================================================== > >> > >> If anybody is interested in a copy of the openvz container (basically > >> a rootfs), let me know, I will push it somewhere. > > That would be cool! > >> > >> From the debian package, how do I configure another box so that the > >> two databases are in master-master mode? > > I am sure Vladimir knows about that, I am not using the Debian packages > at all. > > Basically I end up with the following setup: > > > http://michael.otacoo.com/postgresql-2/start-a-postgres-xc-cluster-in-more-or-less-10-commands/ > > Now I do not understand how master-master can work since the > coordinator is still a SPOF. > OK. I have never used the debian packages so I thought that it installed a small cluster for you automatically like what we can see for postgres on ubuntu for example. 
When you want to create a cluster with multiple Coordinators, the setup is the same, except that when registering nodes you need to do it on each Coordinator, and on each Coordinator you need to register all the other Coordinators. For example, for a 2-Coordinator/2-Datanode cluster on the same server:

1) Initialize:
cd $HOME/pgsql
initgtm -Z gtm -D gtm # Initialize GTM
initdb -D datanode1 --nodename dn1 # Initialize Datanode 1
initdb -D datanode2 --nodename dn2 # Initialize Datanode 2
initdb -D coord1 --nodename co1 # Initialize Coordinator 1
initdb -D coord2 --nodename co2 # Initialize Coordinator 2

2) Change the port numbers if necessary...

3) Start up:
gtm -D gtm & # Start GTM
postgres -X -D datanode1 -i & # Start Datanode 1
postgres -X -D datanode2 -i & # Start Datanode 2
postgres -C -D coord1 -i & # Start Coordinator 1
postgres -C -D coord2 -i & # Start Coordinator 2

4) Define all the nodes on Coordinator 1 and update the pooler cache:
psql -p $CO1_PORT -c "CREATE NODE dn1 WITH (TYPE='datanode', PORT=$DN1_PORT)" postgres # define dn1
psql -p $CO1_PORT -c "CREATE NODE dn2 WITH (TYPE='datanode', PORT=$DN2_PORT)" postgres # define dn2
psql -p $CO1_PORT -c "CREATE NODE co2 WITH (TYPE='coordinator', PORT=$CO2_PORT)" postgres # define co2
psql -p $CO1_PORT -c "SELECT pgxc_pool_reload()" postgres

5) Define all the nodes on Coordinator 2 and update the pooler cache:
psql -p $CO2_PORT -c "CREATE NODE dn1 WITH (TYPE='datanode', PORT=$DN1_PORT)" postgres # define dn1
psql -p $CO2_PORT -c "CREATE NODE dn2 WITH (TYPE='datanode', PORT=$DN2_PORT)" postgres # define dn2
psql -p $CO2_PORT -c "CREATE NODE co1 WITH (TYPE='coordinator', PORT=$CO1_PORT)" postgres # define co1
psql -p $CO2_PORT -c "SELECT pgxc_pool_reload()" postgres

So, simply put, you need to register on each Coordinator all the other nodes of your cluster. -- Michael Paquier http://michael.otacoo.com
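Assuming the catalog layout of XC 1.0, the registration can afterwards be double-checked from either coordinator through the pgxc_node catalog, and a table created through one coordinator should be visible through the other:

psql -p $CO1_PORT -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node ORDER BY node_name" postgres
psql -p $CO2_PORT -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node ORDER BY node_name" postgres
psql -p $CO1_PORT -c "CREATE TABLE t_check (a int) DISTRIBUTE BY HASH (a)" postgres
psql -p $CO2_PORT -c "\d t_check" postgres
psql -p $CO1_PORT -c "DROP TABLE t_check" postgres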
From: Benjamin H. <bh...@ud...> - 2012-08-01 13:52:45
On Wed, Aug 1, 2012 at 3:31 PM, Michael Paquier <mic...@gm...> wrote: > > On 2012/08/01, at 22:28, Benjamin Henrion <bh...@ud...> wrote: > >> On Fri, Jul 27, 2012 at 6:09 PM, Joshua D. Drake <jd...@co...> wrote: >>> >>> Hello, >>> >>> That would be very helpful. Thank you for offering. >> >> I just installed postgres-xc debian package that's available in SID in >> an openvz container, now I have this stuff running: >> >> ======================================================================== >> root@sid /var/lib/postgres-xc [13]# ps aux >> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND >> root 1 0.0 0.0 10588 852 ? Ss 15:20 0:00 init [2] >> root 911 0.0 0.0 52584 1624 ? Sl 15:20 0:00 >> /usr/sbin/rsyslogd -c5 >> root 922 0.0 0.0 18816 852 ? Ss 15:20 0:00 /usr/sbin/cron >> root 929 0.0 0.0 18644 624 ? Ss 15:20 0:00 vzctl: pts/0 >> root 930 0.0 0.0 17788 2008 pts/0 Ss 15:20 0:00 -bash >> 101 8083 0.3 0.1 101552 9140 ? S 15:24 0:00 >> /usr/bin/postgres -C -D /var/lib/postgres-xc/coord >> 101 8093 0.0 0.0 101536 1608 ? Ss 15:24 0:00 >> postgres: pooler process >> 101 8094 0.0 0.0 101536 1984 ? Ss 15:24 0:00 >> postgres: writer process >> 101 8095 0.0 0.0 101536 1812 ? Ss 15:24 0:00 >> postgres: wal writer process >> 101 8096 0.0 0.0 102416 3232 ? Ss 15:24 0:00 >> postgres: autovacuum launcher process >> 101 8097 0.0 0.0 69496 1700 ? Ss 15:24 0:00 >> postgres: stats collector process >> 101 8127 0.3 0.1 101552 9140 ? S 15:24 0:00 >> /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode1 >> 101 8135 0.0 0.0 101536 1988 ? Ss 15:24 0:00 >> postgres: writer process >> 101 8136 0.0 0.0 101536 1784 ? Ss 15:24 0:00 >> postgres: wal writer process >> 101 8137 0.0 0.0 102284 2776 ? Ss 15:24 0:00 >> postgres: autovacuum launcher process >> 101 8138 0.0 0.0 69496 1636 ? Ss 15:24 0:00 >> postgres: stats collector process >> 101 8145 0.3 0.1 101552 9136 ? S 15:24 0:00 >> /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode2 >> 101 8153 0.0 0.0 101536 1984 ? Ss 15:24 0:00 >> postgres: writer process >> 101 8154 0.0 0.0 101536 1780 ? Ss 15:24 0:00 >> postgres: wal writer process >> 101 8155 0.0 0.0 102284 2772 ? Ss 15:24 0:00 >> postgres: autovacuum launcher process >> 101 8156 0.0 0.0 69496 1632 ? Ss 15:24 0:00 >> postgres: stats collector process >> root 8781 0.0 0.0 17784 1984 pts/0 S 15:27 0:00 bash >> root 8792 0.0 0.0 15236 1136 pts/0 R+ 15:29 0:00 ps aux >> root@sid /var/lib/postgres-xc [14]# >> ======================================================================== >> >> If anybody is interested in a copy of the openvz container (basically >> a rootfs), let me know, I will push it somewhere. > That would be cool! >> >> From the debian package, how do I configure another box so that the >> two databases are in master-master mode? > I am sure Vladimir knows about that, I am not using the Debian packages at all. Basically I end up with the following setup: http://michael.otacoo.com/postgresql-2/start-a-postgres-xc-cluster-in-more-or-less-10-commands/ Now I do not understand how master-master can work since the coordinator is still a SPOF. -- Benjamin Henrion <bhenrion at ffii.org> FFII Brussels - +32-484-566109 - +32-2-3500762 "In July 2005, after several failed attempts to legalise software patents in Europe, the patent establishment changed its strategy. 
Instead of explicitly seeking to sanction the patentability of software, they are now seeking to create a central European patent court, which would establish and enforce patentability rules in their favor, without any possibility of correction by competing courts or democratically elected legislators." |
From: Michael P. <mic...@gm...> - 2012-08-01 13:31:12
On 2012/08/01, at 22:28, Benjamin Henrion <bh...@ud...> wrote: > On Fri, Jul 27, 2012 at 6:09 PM, Joshua D. Drake <jd...@co...> wrote: >> >> Hello, >> >> That would be very helpful. Thank you for offering. > > I just installed postgres-xc debian package that's available in SID in > an openvz container, now I have this stuff running: > > ======================================================================== > root@sid /var/lib/postgres-xc [13]# ps aux > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > root 1 0.0 0.0 10588 852 ? Ss 15:20 0:00 init [2] > root 911 0.0 0.0 52584 1624 ? Sl 15:20 0:00 > /usr/sbin/rsyslogd -c5 > root 922 0.0 0.0 18816 852 ? Ss 15:20 0:00 /usr/sbin/cron > root 929 0.0 0.0 18644 624 ? Ss 15:20 0:00 vzctl: pts/0 > root 930 0.0 0.0 17788 2008 pts/0 Ss 15:20 0:00 -bash > 101 8083 0.3 0.1 101552 9140 ? S 15:24 0:00 > /usr/bin/postgres -C -D /var/lib/postgres-xc/coord > 101 8093 0.0 0.0 101536 1608 ? Ss 15:24 0:00 > postgres: pooler process > 101 8094 0.0 0.0 101536 1984 ? Ss 15:24 0:00 > postgres: writer process > 101 8095 0.0 0.0 101536 1812 ? Ss 15:24 0:00 > postgres: wal writer process > 101 8096 0.0 0.0 102416 3232 ? Ss 15:24 0:00 > postgres: autovacuum launcher process > 101 8097 0.0 0.0 69496 1700 ? Ss 15:24 0:00 > postgres: stats collector process > 101 8127 0.3 0.1 101552 9140 ? S 15:24 0:00 > /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode1 > 101 8135 0.0 0.0 101536 1988 ? Ss 15:24 0:00 > postgres: writer process > 101 8136 0.0 0.0 101536 1784 ? Ss 15:24 0:00 > postgres: wal writer process > 101 8137 0.0 0.0 102284 2776 ? Ss 15:24 0:00 > postgres: autovacuum launcher process > 101 8138 0.0 0.0 69496 1636 ? Ss 15:24 0:00 > postgres: stats collector process > 101 8145 0.3 0.1 101552 9136 ? S 15:24 0:00 > /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode2 > 101 8153 0.0 0.0 101536 1984 ? Ss 15:24 0:00 > postgres: writer process > 101 8154 0.0 0.0 101536 1780 ? Ss 15:24 0:00 > postgres: wal writer process > 101 8155 0.0 0.0 102284 2772 ? Ss 15:24 0:00 > postgres: autovacuum launcher process > 101 8156 0.0 0.0 69496 1632 ? Ss 15:24 0:00 > postgres: stats collector process > root 8781 0.0 0.0 17784 1984 pts/0 S 15:27 0:00 bash > root 8792 0.0 0.0 15236 1136 pts/0 R+ 15:29 0:00 ps aux > root@sid /var/lib/postgres-xc [14]# > ======================================================================== > > If anybody is interested in a copy of the openvz container (basically > a rootfs), let me know, I will push it somewhere. That would be cool! > > From the debian package, how do I configure another box so that the > two databases are in master-master mode? I am sure Vladimir knows about that, I am not using the Debian packages at all. Thanks, Michael > > Best, > > -- > Benjamin Henrion <bhenrion at ffii.org> > FFII Brussels - +32-484-566109 - +32-2-3500762 > "In July 2005, after several failed attempts to legalise software > patents in Europe, the patent establishment changed its strategy. > Instead of explicitly seeking to sanction the patentability of > software, they are now seeking to create a central European patent > court, which would establish and enforce patentability rules in their > favor, without any possibility of correction by competing courts or > democratically elected legislators." |
From: Michael P. <mic...@gm...> - 2012-08-01 13:29:04
On 2012/08/01, at 21:09, Ashutosh Bapat <ash...@en...> wrote: > Can Development group members twit on this channel (of course twits about XC only). The goal of this bot is only automatic git commit information and perhaps release info, not more. You can still use your personal twitter channel for things you want to tell about xc. Don't forget the hash tag #pgxc! > > On Wed, Aug 1, 2012 at 5:36 PM, Michael Paquier <mic...@gm...> wrote: > Hi all, > > I spent some time today to setting up a twitter account for Postgres-XC project. > Here is more about it: > - Twitter username: @PostgresXCBot > - Twitter URL: http://twitter.com/PostgresXCBot > > This twitter thread will be used to send information about Postgres-XC like releases or official information. > Also, it acts as a Git commit bot, meaning that each time a commit is done in Github repository a twit the commit is sent automatically. > This way you can easily follow the latest development of Postgres-XC. > Thanks, > -- > Michael Paquier > http://michael.otacoo.com > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
From: Benjamin H. <bh...@ud...> - 2012-08-01 13:28:24
On Fri, Jul 27, 2012 at 6:09 PM, Joshua D. Drake <jd...@co...> wrote: > > Hello, > > That would be very helpful. Thank you for offering. I just installed postgres-xc debian package that's available in SID in an openvz container, now I have this stuff running: ======================================================================== root@sid /var/lib/postgres-xc [13]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 10588 852 ? Ss 15:20 0:00 init [2] root 911 0.0 0.0 52584 1624 ? Sl 15:20 0:00 /usr/sbin/rsyslogd -c5 root 922 0.0 0.0 18816 852 ? Ss 15:20 0:00 /usr/sbin/cron root 929 0.0 0.0 18644 624 ? Ss 15:20 0:00 vzctl: pts/0 root 930 0.0 0.0 17788 2008 pts/0 Ss 15:20 0:00 -bash 101 8083 0.3 0.1 101552 9140 ? S 15:24 0:00 /usr/bin/postgres -C -D /var/lib/postgres-xc/coord 101 8093 0.0 0.0 101536 1608 ? Ss 15:24 0:00 postgres: pooler process 101 8094 0.0 0.0 101536 1984 ? Ss 15:24 0:00 postgres: writer process 101 8095 0.0 0.0 101536 1812 ? Ss 15:24 0:00 postgres: wal writer process 101 8096 0.0 0.0 102416 3232 ? Ss 15:24 0:00 postgres: autovacuum launcher process 101 8097 0.0 0.0 69496 1700 ? Ss 15:24 0:00 postgres: stats collector process 101 8127 0.3 0.1 101552 9140 ? S 15:24 0:00 /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode1 101 8135 0.0 0.0 101536 1988 ? Ss 15:24 0:00 postgres: writer process 101 8136 0.0 0.0 101536 1784 ? Ss 15:24 0:00 postgres: wal writer process 101 8137 0.0 0.0 102284 2776 ? Ss 15:24 0:00 postgres: autovacuum launcher process 101 8138 0.0 0.0 69496 1636 ? Ss 15:24 0:00 postgres: stats collector process 101 8145 0.3 0.1 101552 9136 ? S 15:24 0:00 /usr/bin/postgres -X -D /var/lib/postgres-xc/datanode2 101 8153 0.0 0.0 101536 1984 ? Ss 15:24 0:00 postgres: writer process 101 8154 0.0 0.0 101536 1780 ? Ss 15:24 0:00 postgres: wal writer process 101 8155 0.0 0.0 102284 2772 ? Ss 15:24 0:00 postgres: autovacuum launcher process 101 8156 0.0 0.0 69496 1632 ? Ss 15:24 0:00 postgres: stats collector process root 8781 0.0 0.0 17784 1984 pts/0 S 15:27 0:00 bash root 8792 0.0 0.0 15236 1136 pts/0 R+ 15:29 0:00 ps aux root@sid /var/lib/postgres-xc [14]# ======================================================================== If anybody is interested in a copy of the openvz container (basically a rootfs), let me know, I will push it somewhere. >From the debian package, how do I configure another box so that the two databases are in master-master mode? Best, -- Benjamin Henrion <bhenrion at ffii.org> FFII Brussels - +32-484-566109 - +32-2-3500762 "In July 2005, after several failed attempts to legalise software patents in Europe, the patent establishment changed its strategy. Instead of explicitly seeking to sanction the patentability of software, they are now seeking to create a central European patent court, which would establish and enforce patentability rules in their favor, without any possibility of correction by competing courts or democratically elected legislators." |
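Regarding the master-master question at the end: in XC every coordinator accepts writes, so the usual pattern is to run a coordinator on the second box, point it at the same GTM and datanodes, and then register each coordinator on the other with CREATE NODE, this time passing a HOST as well. Roughly, with host names, node names and ports as placeholders:

# On box1's coordinator, register the coordinator running on box2:
psql -p 5432 -c "CREATE NODE coord2 WITH (TYPE='coordinator', HOST='box2', PORT=5432)" postgres
psql -p 5432 -c "SELECT pgxc_pool_reload()" postgres
# Symmetrically on box2, register box1's coordinator (plus the datanodes, with their HOSTs):
psql -h box2 -p 5432 -c "CREATE NODE coord1 WITH (TYPE='coordinator', HOST='box1', PORT=5432)" postgres
psql -h box2 -p 5432 -c "SELECT pgxc_pool_reload()" postgres

With that in place the coordinators are no longer a single point of failure; GTM remains the single shared component (a GTM standby being the usual mitigation), so it is the piece to plan around for availability.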