From: Nikhil S. <ni...@st...> - 2012-10-28 06:03:23
> Without HA, we might someday go out of business - without some claim to
> scalability, we can't get into business to begin with.

Rightly said.

> Can you (1) do a full dump, then (2) kill, drop and rebuild the cluster,
> and then (3) restore the entire cluster using pg_restore (or psql ..
> < dumpfile) through a coordinator? This would be a last resort, obviously,
> since I'd lose all the data on every datanode since the last full dump,
> but if I know I can do that, at least I know I have that option.

Yeah, this is possible and it works. I once tested this by modifying
pg_hba.conf to disallow all application connections. Then I did a dump of
global objects like users, roles, etc., followed by dumps of each of the
databases in my XC cluster. One can also use pg_dumpall if the current
cluster size is not too large. All of this was done by pointing at a single
coordinator, but everything was consistent at the cluster level. A
subsequent pg_restore/psql to populate a new cluster worked pretty well.
(A command sketch of this appears further below.)

> as long as I'm going through a coordinator, the effects of the sql
> statements should replicate or distribute across the cluster depending on
> how the table was set up.. right?

Yes, the above is correct.

> I can probably create a temporary table on a single datanode, through a
> coordinator, just by telling it to distribute that table, and only list
> the one datanode I want it on, right? Then I can do a data-only restore,
> of just that table, then from there I can use it through a coordinator,
> and affect whatever other tables I need to.

Yeah, the above is very much possible (a SQL sketch follows below). See
Michael's blog on this:
http://michael.otacoo.com/postgresql-2/pgxc-data-distribution-in-a-subset-of-nodes/

> Hmm, so I wonder what I actually would do if a datanode went down, or if
> the gtm server went down.

The GTM server can be configured with a GTM standby, and you can fail over
to it in case of issues with the GTM. A datanode can be configured with
synchronous/asynchronous replicas, and again one can fail over to those in
case of issues. Using an HA framework and some PGXC tools that are being
worked on will help automate this in the coming days.
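To sketch the GTM standby piece (this is from memory, so check the
parameter names and gtm_ctl invocations against the gtm_ctl reference;
hostnames and paths here are made up):

    # gtm.conf on the standby host
    nodename = 'gtm_standby'
    listen_addresses = '*'
    port = 6666
    startup = STANDBY            # follow the active GTM
    active_host = 'gtm-master'   # host where the active GTM runs
    active_port = 6666

    # Start the standby; if the active GTM fails, promote it:
    gtm_ctl start   -Z gtm -D /var/lib/pgxc/gtm_standby
    gtm_ctl promote -Z gtm -D /var/lib/pgxc/gtm_standby

After a promotion the GTM proxies have to be pointed at the new active GTM
(gtm_ctl reconnect); automating that is part of what the HA tooling
mentioned above is meant to cover.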
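And here is the promised sketch of the full dump-and-rebuild option.
Coordinator host, port, database name, and file names are all hypothetical:

    # Dump globals (roles etc.) and each database through one coordinator.
    pg_dumpall -h coord1 -p 5432 --globals-only > globals.sql
    pg_dump    -h coord1 -p 5432 -Fc mydb > mydb.dump

    # After killing, dropping, and rebuilding the cluster, restore
    # through a coordinator again:
    psql       -h coord1 -p 5432 -d postgres -f globals.sql
    pg_restore -h coord1 -p 5432 -d postgres -C mydb.dump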
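And the single-datanode table, issued through a coordinator (node, table,
and column names are made up; this is the 1.0-era syntax, so double-check
it against your version and Michael's post above):

    psql -h coord1 -p 5432 -d mydb -c "
      CREATE TABLE staging_only (id int, payload text)
      DISTRIBUTE BY HASH(id) TO NODE dn1;"

    # A data-only restore can then target just that table:
    pg_restore -h coord1 -p 5432 -d mydb --data-only -t staging_only mydb.dump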
HTH,
Nikhils

> On Fri, Oct 26, 2012 at 10:54 AM, David Hofstee <pg...@c0...> wrote:
>
>> 1. No cluster without HA option; I agree.
>> 2. Integrate XC into PG; in the future I would like to think of a
>>    single PG instance as a 1-node cluster-able db.
>>
>> I think PGXC is the best thing that is happening. PGXC deserves to be
>> the most usable in the world too (instead of mysql). Gtx,
>>
>> David
>>
>> Vladimir Stavrinov wrote on 2012-10-26 14:46:
>>
>> On Thu, Oct 25, 2012 at 4:13 AM, Michael Paquier
>> <mic...@gm...> wrote:
>>
>>> 1) It is not our goal to oblige the users to use one HA solution or
>>> another,
>>
>> Sounds fine. Where are those users? Who wants a cluster without HA?
>> Everybody who hears the word "cluster" assumes "HA".
>>
>>> Postgres code with XC. One of the reasons XC is able to keep up with
>>> the Postgres code pace so easily is that we avoid implementing
>>> solutions in core that might unnecessarily impact its interactions
>>> with Postgres.
>>
>> You are heroes. How long can you keep up this "code pace" on such a
>> hard road? This paradigm prevents you from implementing not only HA
>> but a lot of other things that are necessary for a cluster. I have
>> never seen this type of fork. I believe at some point you will either
>> become part of Postgres or break off entirely and go your own way.
>> The only question is when? And the best answer is "right now".

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Shavais Z. <sh...@gm...> - 2012-10-28 01:36:06
On Fri, Oct 26, 2012 at 4:01 PM, Vladimir Stavrinov <vst...@gm...> wrote:

> On Fri, Oct 26, 2012 at 12:36:25PM -0700, Roger Mayes wrote:
>
>> they ever were to, I can press a button and have another server up,
>> of the exact same configuration, in minutes flat, restored,
>> potentially, to an image created the previous night. We can
>
> This is good for a standalone system, but with a cluster those images of
> all nodes should be synchronized.

Well, the point would be to get a replacement server going for the server
that died, with all the software installed and the configuration set up,
after which my hope has been that we'd be able to reinitialize the database
on that host and perform some kind of recovery process to get it back up
and working within the cluster. But maybe that requires some of the HA
features that you're talking about, that XC doesn't have working yet?

In the 90's I was a "master's certified" Oracle DBA / Unix Admin, with
special training on "tuning the whole system", blah blah blah. I set up and
managed various HA clusters, like HP's MC Service Guard and Sequent's HA
clustering stuff, together with Oracle's database clustering, which was all
based on shared SCSI, and FDDI, and all that. Ancient history, yes, but I'm
not completely without a clue about HA concepts in general; it's just kind
of a brave new/old world for me, with all this poor man's Open Source
stuff, and with rented cloud servers and so on.

> Moreover, a cluster inside virtual machines is something exotic. If all
> your nodes are running on the same hardware host, what do you need a
> cluster for?

Well, the hardware they have at these pseudo-cloud datacenters is all
sitting on a combination of copper and fiber backplanes that connect a
number of CPUs, drives, power supplies, memory boards, etc. They try to
eliminate single points of failure except for the backplanes; they can do
more-or-less transparent hot swapping of drives, power supplies, maybe even
CPU boards, depending on which phase of their whole setup your hosts are
running on. (They're always upgrading and moving their stuff forward, and
once in a while they'll coordinate with you to move your hosts off of older
hardware onto their newer stuff.)

We have QOS guarantees for a certain amount of processing capacity, IO
bandwidth, and network bandwidth and so on per host, depending on how we
configure them, and our experience so far with these virtual hosts has been
that the TPS and bandwidth levels are pretty consistent. It's measurable;
we occasionally run some (sometimes very quick and dirty) benchmarks to
make sure, because once or twice (in 3 years) we've caught them in a
misconfiguration of their equipment, after some maintenance they did, that
was limiting us incorrectly. But they have a maximum amount of processing,
memory, and IO bandwidth that they can give us per host, and sometimes we
need more than that in order to handle the response to something like, say,
a Facebook post by Taylor Swift. (Boom - millions of hits in the space of a
few minutes. Unbelievable. Quite a spectacular thing to watch.)

[ Can you imagine being so famous that with a one-line Facebook wall post
you can single-handedly crash (well, load down to the point of being
essentially halted) any single-host setup? hahaha jeezuz h. It's a good
thing I'm not, I think it would Drive Me Nutz. Look out, don't have a bad
facial expression on for 1 second where any camera might happen to catch
you. Millions of teenagers will have their idealized image of you
shattered.
The men in white coats would come for me after about a week. (They probably
should have long since dragged me away, actually, but thankfully I'm not
famous enough for them to know it. I'm whistling dixie at the moon, waving
my arms maniacally, and dancing by under their radar, lol.

http://www.youtube.com/watch?v=GUfS8LyeUyM
http://www.youtube.com/watch?v=5yO_P0ZmuBc
http://www.youtube.com/watch?v=uq-gYOrU8bA

YouTube. Ok, it's awash in visual, semantic, and synaptic spam, but it's
infinitely better than the old jukebox, and for practically no cost at all.
We truly do live in the "age of miracles and wonders". Hey, what better
place than here to share bizarre playlists with a select group of complete
strangers? I remind you of a song? Bring it on.) ]

Anyway, ideally what we'd like is to have a nominal setup that we run most
of the time, one that we can expand as quickly and as transparently as
possible into a much larger setup that we run for a few hours or a few days
at a time, and then shrink back down to our "business as usual" setup. But
if we have to, we can have a bunch of nominally configured hosts that we
scale up (in terms of the number of CPUs, the memory size, the network and
IO bandwidth, etc.) in advance of a marketing blitz (i.e., one Taylor Swift
Facebook post, lol), and then back down afterwards, and if it costs us an
hour or so of downtime in each direction, that's not too big of a deal for
us.

> Run a standalone system without a virtual machine and you've got more
> capacity. Or simply pay for more resources for a single virtual machine.
> The same goes if your XC nodes are running on different hardware nodes:
> you will use only a small part of the resources you have paid for. What
> do you need virtual machines for there? They are needed by your provider
> to share resources among his customers, but not by you.
>
> I am running XC on virtual machines, but for testing and debugging only.
>
>> HA is important for us, scalability is definitely the more pressing
>> matter: Without HA, we might someday go out of business - without
>> some claim to scalability, we can't get into business to begin
>> with.
>
> I really enjoy your maxim. Your philosophy is applicable to everything in
> this wonderful world. We can't lose something we don't have. So first we
> want to have, then we don't want to lose. And it is not about priorities;
> it is about "to be or not to be". There's even a nice song about this.
> But what you wrote below proves it: you need HA first of all.
>
>> Can you do log shipping, hot backups, and recover a cluster to a
>> point in time? If not what is the quickest/best backup/recovery
>> procedure? Whatever it is, it is something I'll need to get
>> scripted and working (I mean I'll write
>
> logs should be handled on every node, it is not so simple.

Yeah, I was thinking this was probably the case. So what I'm not sure of is
what you do after your datanode has been recovered as far as you can get it
recovered using the usual single-database recovery techniques - how do you
then get it back into the cluster, and get it up to speed and running in
the cluster, etc. I'm imagining it can be done? Hopefully it's just a
matter of reading some docs and doing some experimentation.
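For what it's worth, since each coordinator and datanode is more or less an
ordinary Postgres instance underneath, my assumption has been that the
usual per-node archiving settings apply - something like the following in
each node's postgresql.conf, with a separate archive directory per node
(paths made up); the part XC doesn't solve for you, as you say, is getting
a cluster-wide consistent point in time out of all those per-node archives:

    # postgresql.conf, on every coordinator and datanode
    wal_level = hot_standby
    archive_mode = on
    archive_command = 'cp %p /backup/wal/dn1/%f'   # one directory per node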
But I don't think that proves we need HA more than we need scalability, or
even as much. We can launch without any working recovery plan if we have
to. But it does us no good to launch if we can't handle the load.

If I'm wearing my old DBA hat, I know I'm slitting my throat saying that -
a DBA / Unix Admin "is only as good as their backups". That's certainly the
truth. But I'm not concerned about the security of my DBA role; in fact
I've been trying hard to cast it off for ages, actually. But as a business
man - I need a throat to cut before I can cut it. The risk of a crash is
small and tolerable, but if I'm not convinced I'll be able to handle the
load - that's a show stopper. But it seems like, for the most part, the
important scalability features are there at this point, right? So the very
next thing on the list, I would think, would be HA. And it sounds like the
XC devs are working fairly feverishly on it.