[04:13] <themonk> I am facing a problem: my conf.tmpl file has some variables that come from config.yaml, and the last 2 come from a relation. The problem is that after the relation is joined, if I change the configuration, those 2 relation-dependent variables go missing. How do I solve this? need help
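A common fix for themonk's problem is to cache relation-provided values in a file under the charm directory when the relation hook fires, so that config-changed (which has no relation context) can still re-render conf.tmpl with them. A minimal bash sketch; the state path, key names, and stand-in values are illustrative, not from the original charm:

```shell
#!/bin/bash
# Cache relation values so hooks without relation context (e.g. config-changed)
# can still render templates that mix config.yaml and relation settings.
STATE_DIR="${CHARM_DIR:-/tmp/charm}/.relation-cache"

save_relation_value() {   # save_relation_value <key> <value>
    mkdir -p "$STATE_DIR"
    printf '%s' "$2" > "$STATE_DIR/$1"
}

load_relation_value() {   # load_relation_value <key> [default]
    cat "$STATE_DIR/$1" 2>/dev/null || printf '%s' "${2:-}"
}

# In db-relation-changed you would run something like:
#   save_relation_value db_host "$(relation-get host)"
# Stand-in values so this sketch is self-contained:
save_relation_value db_host 10.0.0.5
save_relation_value db_pass secret

# In config-changed, render the template from config-get output plus the cache:
echo "host=$(load_relation_value db_host) pass=$(load_relation_value db_pass)"
# -> host=10.0.0.5 pass=secret
```

The key point is that relation settings are only reachable via `relation-get` inside relation hooks, so any hook that runs later must read from a local copy.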
[08:08] <_sEBAs_> can some soul tell me why, in juju debug-hooks, commands like `config-get' are not working?
[08:09] <_sEBAs_> please :P
[08:09] <_sEBAs_> I'm trying to debug a hook, but it's impossible if I can't even run the charm commands like "config-get"
[09:00] <yolanda> hi jamespage, stub
[09:01] <jamespage> hey yolanda
[09:01] <stub> So rather than extending the PostgreSQL db relation to be able to specify multiple databases and users, and return multiple passwords to the clients, I was thinking just using multiple relations would be better
[09:02] <stub> Each relation stays simple, one database and user. If your charm needs two, it opens two relations and gets exactly the same information. Less work probably as it doesn't need to decode the more complex data structures.
[09:03] <stub> This also means other charms, like the pgbouncer connection pooler, don't need to be updated to support extensions
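stub's multiple-relations approach would look something like this in a client charm's metadata.yaml: two separately named relations over the same interface, each yielding its own database and credentials. The relation names and charm name here are illustrative, not from an actual charm:

```yaml
# Hypothetical client charm metadata.yaml
requires:
  main-db:
    interface: pgsql      # one database + one user via this relation
  reporting-db:
    interface: pgsql      # a second, independent database + user
```

The charm would then be related twice, e.g. `juju add-relation myapp:main-db postgresql:db` and `juju add-relation myapp:reporting-db postgresql:db`, with each relation carrying the same simple data as today.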
[09:04] <yolanda> jamespage, i think it makes sense and actually looks cleaner
[09:04] <jamespage> ok - this sounds reasonable
[09:05] <yolanda> we can apply that to postgresql alternative at the moment, not touch mysql
[09:07] <stub> If it is going to cause scheduling issues, or it just isn't going to work, we can proceed with yolanda's work mimicking mysql's interface. But it will mean compatibility issues with pgbouncer, and we might want to flag the interface extensions as experimental or temporary.
[09:09] <yolanda> stub, i think that at the moment we are fine implementing new approach, postgresql is a new feature so we can work on the best approach
[09:10] <stub> Ok. Let me know how you go or if you want any help. I'd rather be an enabler than a blocker :)
[09:11] <jamespage> yolanda, stub: sorry - ended up with two kids for a minute then!
[09:12] <jamespage> yolanda, lets take this approach - its different to mysql BUT this should only apply in the nova-cloud-controller charm
[09:12] <jamespage> all other charms really only need one db access
[09:12] <yolanda> yes, it's a special case
[09:12] <jamespage> yolanda, nova-compute and quantum-gateway might think they do but they don't
[09:12] <yolanda> what do you mean?
[11:01] <jamespage> yolanda, those two charms I think request multiple db's
[11:01] <jamespage> yolanda, I noticed this during ssl-everywhere - they don't need them
[11:01] <jamespage> nova-compute does not need the db connection at all in later releases
[11:02] <jamespage> quantum-gateway just needs nova
[11:02] <jamespage> yolanda, I fixed both of these in the ssl-everywhere branches - we have a lot to land still
[11:02] <yolanda> jamespage, so is it better to use ssl-everywhere as a base?
[11:03] <jamespage> yolanda, not yet
[11:03] <yolanda> well, dealing with nova-c-c at the moment
[11:11] <jamespage> yolanda, get that up on branches and we can then review for the other ones
[11:12] <yolanda> jamespage, currently working on personal one: lp:~yolanda.robla/charms/precise/nova-cloud-controller/postgresql-alternative
[11:12] <jamespage> yolanda, +1 that's fine
[11:12] <yolanda> doesn't look bad but i have to deal with the neutron postgres settings, still pointing to sqlite
[12:19] <jamespage> yolanda, I proposed https://code.launchpad.net/~openstack-charmers/charm-helpers/active-active/+merge/211285
[12:19] <jamespage> I think that's good now
[12:19] <yolanda> cool! it has been hard
[12:19] <yolanda> did you do any more updates?
[12:28] <yolanda> jamespage, also, about automatically detecting rabbit failures and switching over: it works much better on icehouse. In havana only nova-compute seems to work fine
[12:28] <jamespage> yolanda, yeah - I think the kombu in 14.04/icehouse is better at dealing with this
[13:36] <jamespage> yolanda, in fact I'm going to merge the active-active branches into the icehouse branches to test
[13:45] <yolanda> ok
[13:45] <yolanda> the way i tested it is using icehouse for rabbit, and havana for the others
[15:04] <timrc> How do I debug: 2014-03-20 15:03:42 ERROR juju.cmd supercommand.go:293 environment has no bootstrap configuration data -- I'm using a fresh config à la juju generate-config and then juju switch'ing to 'local'
[15:12] <Tug> Hi, I found out about juju very recently, looks promising! I have a few questions of my own (so from a juju newbie)
[15:13] <Tug> for instance I'm wondering about the capabilities of the local provider
[15:14] <Tug> It probably won't virtualize a load balancer like the one cloud providers usually have (ELB on AWS) right ?
[15:15] <timrc> strace to the rescue
[15:33] <Fishy_> So say I want to use MAAS to deploy a group of servers, and then put 3 or 4 juju LXC containers on each... is this sane, or is mixing MAAS and LXC not good
[15:34] <Fishy_> or do I do an open stack thing
[15:34] <Fishy_> and bake that on top of MAAS servers
[15:36] <Fishy_> basically i think I can turn all of my apps into charms, which is good. trying to figure out how to design the rest
[15:37] <Fishy_> 10% will need to run directly on the hardware.  90% I want to run in LXC, on servers that get set up via MAAS or some kind of magic
[15:39] <bloodearnest> Tug: correct, in fact no environment type can "install" SaaS like ELB. Charms can have support for it explicitly, but I don't think that would work in local provider
[15:40] <bloodearnest> Tug: but the future roadmap has the idea of "virtual charms", which are in your environment and act as a gateway/proxy to SaaS, so you can relate to it, configure it, etc
[15:41] <Tug> bloodearnest, interesting !
[15:41] <Tug> bloodearnest, for now a classic charm won't be able to do any network configuration if I use the local provider or vagrant, right?
[15:43] <bloodearnest> Tug: can you clarify what you mean by network configuration? Some things are likely possible, others not
[15:44] <Tug> ok let's take an example, the mongodb cluster charm
[15:44] <Tug> you have to somehow configure each shard and mongos etc
[15:44] <Tug> so each machine has to be aware of the global configuration I guess
[15:46] <bloodearnest> Tug: so, mongodb-cluster is a "bundle" that deploys 5 services using the same charm (mongodb), but configuring and relating them differently
[15:47] <bloodearnest> Tug: the relations add the ip/port config between machines, and each service's config controls overall settings for the service
[15:47] <Tug> bloodearnest, ok thx for the explanation
[15:48] <bloodearnest> Tug: so in terms of ip address, there's nothing you need to do, except maybe expose some ports externally
[15:48] <Tug> ok so juju would be able to configure this for each vagrant vm automatically ?
[15:49] <Tug> I guess it works like the MAAS
[15:50] <bloodearnest> Tug: yes, that would work on vagrant/lxc/whatever
[15:51] <Tug> cool, I have to try it then!
[15:51] <bloodearnest> Tug: the relevant provider implementation knows how to setup local networking
[15:51] <Tug> Thx, bloodearnest
[15:51] <bloodearnest> Tug: your welcome
[15:51] <bloodearnest> you're*
[15:52] <bloodearnest> Fishy_: sounds like you want the manual provider in 1.17
[15:52] <bloodearnest> Fishy_: and deploying LXC containers on MAAS controlled machines should work fine
[15:52] <Fishy_> but i only want 1 environment right?
[15:52] <bloodearnest> Fishy_: yes - type: manual
[15:53] <bloodearnest> Fishy_: you add machines/lxcs to it manually
[15:53] <Fishy_> okay
[15:53] <bloodearnest> and then do juju deploy --to=<machine id>
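The manual-provider workflow bloodearnest sketches would look roughly like this; the environment name, hostnames, and machine ids are made up for illustration (and recall the caveat later in the log that the manual provider was still beta in 1.17):

```yaml
# Hypothetical environments.yaml stanza for the 1.17 manual provider
environments:
  metal:
    type: manual
    bootstrap-host: master.example.com   # a machine you already control
```

After `juju bootstrap`, machines (bare metal or LXC containers you created yourself) are enlisted by hand with `juju add-machine ssh:ubuntu@node2.example.com`, and services are placed explicitly with `juju deploy --to 2 mysql`.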
[15:53] <Fishy_> " This is useful if you have groups of machines that you want to use for Juju but don't want to add the complexity of a new OpenStack or MAAS setup"
[15:53] <Fishy_> but im ok with an openstack or MAAS setup
[15:53] <Fishy_> I just need to sometimes deploy to hardware, sometimes to LXC
[15:54] <Fishy_> and I want the hardware to be set up by something, like MAAS
[15:54] <bloodearnest> Fishy_: currently, non-manual single environment can only deploy to a single type of vm abstraction
[15:55] <bloodearnest> e.g openstack, ec2, azure, local container (on same machine, lxc or kvm)
[15:55] <bloodearnest> Fishy_: to mix and match within one env, you will need the manual provider I think
[15:55] <Fishy_> ok so if I go manual, I will need to do all the LXC stuff myself?  what it does in local for me already
[15:55] <bloodearnest> yes, I'm afraid
[15:56] <bloodearnest> Fishy_: local provider will only deploy on the local machine AFAIK
[15:56] <Fishy_> maybe better to just run 2 environments?
[15:56] <Fishy_> and call switch a lot?
[15:56] <timrc> This documentation: https://juju.ubuntu.com/docs/config-LXC.html seems at odds with today's reality
[15:57] <timrc> for example sudo juju bootstrap for local is met with an error claiming you should not bootstrap as root
[15:57] <bloodearnest> Fishy_: it's generally better to have 1 env per "service" I think
[15:57] <bloodearnest> Fishy_: e.g. we have maybe 20+ production environments
[15:58] <Fishy_> well we have 20 services... 1 needs bare metal, 19 can be on LXC
[15:58] <bloodearnest> varying from a half dozen machines to 40 or so
[15:58] <Fishy_> its different things
[15:58] <bloodearnest> Fishy_: yes, ours are all different services. They do interact, but not via juju (yet)
[15:58] <Fishy_> i would just have prod and qa... and then the two types mentioned
[15:59] <Fishy_> hum
[15:59] <bloodearnest> timrc: yeah, that's for 1.16, sounds like  you're on 1.17
[15:59] <Fishy_> i dont need interaction via juju
[15:59] <Fishy_> i just need the machines set up
[15:59] <bloodearnest> Fishy_: in that case I suggest one env per service
[15:59] <bloodearnest> Fishy_: we have 2 per service - staging and prod
[16:00] <timrc> bloodearnest, Correct.. 1.17.4-trusty-amd64
[16:01] <Fishy_> okay so a MAAS service can deploy 20 nodes for me...  5 of those bare metal to install my perf apps, and 15 of them future VM hosts..  how do I do the VM host step?  Make a charm that sets up an openstack or something?
[16:01] <timrc> bloodearnest, How do environments.yaml and environments/*.jenv's jive? The local.jenv file was completely empty and I had to add things by hand... bootstrapping is still crashing and burning atm, but I feel like I'm missing some fundamental step here
[16:02] <bloodearnest> timrc: don't mess with environments/*.jenv - they are autogenerated and used for introspection by tools
[16:02] <Fishy_> or I guess I could just make everything MAAS.. but then deploy 5 apps to the same MAAS server.. but that seems against the juju style
[16:03] <bloodearnest> Fishy_: there is a well tested openstack set of charms that we use on MAAS to deploy openstack
[16:03] <Fishy_> oh okay so I could make my MAAS VM host guys use openstack
[16:03] <bloodearnest> timrc: for 1.17, just do juju bootstrap, it will prompt for sudo when needed
[16:03] <Fishy_> then my openstack environment hits that ?
[16:03] <bloodearnest> Fishy_: sounds right
[16:04] <Fishy_> will that use LXC?
[16:04] <timrc> bloodearnest, I didn't want to mess with it but when I went to bootstrap it said there was no bootstrap-config data
[16:04] <Fishy_> under the covers
[16:04] <timrc> bloodearnest, and the local.jenv file was empty :(
[16:04] <bloodearnest> Fishy_: I believe openstack can use lxc as a machine type, yes
[16:04] <bloodearnest> Fishy_: but it defaults to kvm
[16:05] <bloodearnest> Fishy_: there was a thread about this recently on the juju mailing list
[16:05] <Fishy_> or is there an alternative to openstack
[16:05] <Fishy_> that will use lxc
[16:05] <bloodearnest> timrc: try juju destroy-environment local --force && juju bootstrap
[16:06] <bloodearnest> Fishy_: not that I know of
[16:07] <Fishy_> openstack bundle is 19 charms
[16:07] <Fishy_> that seems pretty intense
[16:08] <bloodearnest> Fishy_: openstack is notoriously difficult to deploy. Juju is one of the easiest ways atm, AIUI
[16:08] <bloodearnest> Fishy_: the manual provisioning is not so difficult to use
[16:08] <Fishy_> ya okay that is sounding better
[16:08] <Fishy_> i just like how simple local is
[16:08] <Fishy_> i want to apply that to many machines
[16:09] <bloodearnest> Fishy_: say you spin up 30 lxc containers across your cluster
[16:09] <Fishy_> i always know what box what app will run on
[16:09] <bloodearnest> create a manual juju env
[16:09] <Fishy_> so when app A needs to deploy, I know I want it on VM host 123
[16:10] <bloodearnest> Fishy_: right, so --to is your friend
[16:10] <Fishy_> ya
[16:10] <timrc> bloodearnest, I got some very informative: ERROR exit status 1's
[16:10] <bloodearnest> Fishy_: it's the only way to get fine grained control on placement
[16:11] <bloodearnest> timrc: lols, I get those too sometimes. A bootstrap should still work even when the env is not destroyed 100%, it will just pave over
[16:12] <timrc> I wish
[16:12] <Fishy_> it does blow up hard if you are in the .juju directory
[16:12] <Fishy_> when you destroy
[16:12] <Fishy_> and try to re-bootstrap
[16:12] <timrc> juju bootstrap
[16:12] <timrc> ERROR environment is already bootstrapped
[16:12] <timrc> so it looks like it can't actually destroy the environment
[16:12] <bloodearnest> Fishy_: caveat - the manual provider is considered beta, AIUI
[16:13] <bloodearnest> timrc: so, I've had this on occasion
[16:13] <timrc> this worked reasonably well a year or so ago
[16:13] <timrc> I guess a lot has changed
[16:13] <bloodearnest> timrc: rm -rf ~/.juju/local/*
[16:13] <Fishy_> kind of thinking MAAS now, and just shove 5 apps to the same host and forget about vms
[16:13] <bloodearnest> has worked for me in getting juju to recognise that the env is really, actually dead
[16:13] <Fishy_> going in circles..
[16:14] <timrc> bloodearnest, Oh, I've done that... that will get the bootstrap to start but it dies after the mongodb step... I'll check the log.. seems to be a bit of a mess
[16:14] <bloodearnest> timrc: nasty.
[16:14] <Fishy_> mongoDB is a mess, I am in the process of migrating away ;)
[16:14] <bloodearnest> have you done an update recently? 1.17.5 is out
[16:15] <timrc> bloodearnest, well I updated today and got 1.17.4
[16:15] <bloodearnest> timrc: right, 1.17.5 must not be in trusty yet
[16:16] <bloodearnest> timrc: so I have encountered this issue of mongo just dying before
[16:16] <bloodearnest> but don't know the fix
[16:18] <Fishy_> can look in mongo logs?
[16:20] <timrc> apt-get --purge mongodb-server :)?
[16:20] <timrc> er purge*
[16:26] <timrc> well canonistack is back, I think, so I'll switch back to that... really wish I could get lxc/local provider working
[16:27] <Fishy_> installing juju actually crashed on me on day 1
[16:27] <Fishy_> because I had mongodb already installed
[16:27] <Fishy_> it totally crapped the bed
[16:28] <Fishy_> but I blame mongo
[16:36] <bloodearnest> Fishy_: so someone just posted to the juju list describing something similar to your problem - was that you? :)
[17:28] <Fishy_> yes
[17:29] <Fishy_> no offense but i wanted hivemind thought
[17:29] <Fishy_> see what else we are missing
[17:43] <bloodearnest> Fishy_: oh no offense taken, there are many people more qualified than me to answer your questions :)
[17:45] <rick_h_> lies bloodearnest or bust!
[17:48] <hazmat> geekmush, ping
[17:49] <hazmat> geekmush, wondering if we can do an interactive session.. i think there's something getting lost in the translation
[17:54] <geekmush> ?
[17:54] <hazmat> geekmush, re do plugin
[17:54] <geekmush> I haven't had time to swing back around to juju yet, sorry.
[17:55] <sfeole> timrc-afk: 1.17.5 should fix your local provider issues
[17:55] <sfeole> timrc-afk: need to add the juju/devel ppa
[17:55] <hazmat> geekmush, sorry, i was thinking about a different gh ticket... and conversation
[17:56] <geekmush> hazmat:  heh, no problem … I was kinda wondering … then, again, I *could* have been sleep juju'ing ..  :)
[17:58] <Fishy_> im going bold and switching to dev
[17:58] <Fishy_> seems more interesting
[18:00] <marco-traveling> Fishy_: it's pretty stable, and great to use if you are just trying it out
[18:01] <marco-traveling> Fishy_: but if you are doing production deployments use stable because that has an upgrade path
[18:01] <Fishy_> ya i need to figure out if juju is the right path still
[18:01] <Fishy_> before I start prod
[18:01] <marco-traveling> Fishy_: then use devel
[18:01] <marco-traveling> it's got all the features coming in the next stable
[18:04] <Fishy_> So tell me about upgrade..  is the idea you update the core binary in your charm, but without blowing away and recreating the entire VM?
[18:16] <timrc> I (seemingly) randomly started getting: 2014-03-20 18:03:19 WARNING juju.environs open.go:258 failed to write bootstrap-verify file: cannot make Swift control container: failed to create container: 595054e0e7b048cb87887a0b3d7bc663 when I attempt to bootstrap to an openstack provider... has anyone seen this? I currently have 1.16.5-trusty-amd64 but same thing with 1.17.4 -- I juju init'ed a fresh JUJU_HOME... the folks that manage swift don't seem to think it's a problem on their end... at least other users are not reporting the problem
[20:08] <onrea> What is the public address of Wordpress when installed by juju?
[20:08] <onrea> ==> http://askubuntu.com/q/436975/152405
[20:08] <onrea> lazyPower: ^
[20:10] <lazyPower> onrea: your AU question references discourse, and you're asking about wordpress
[20:10] <lazyPower> i'm confused on which you want answered
[20:10] <onrea> It's because people are not familiar with Discourse
[20:11] <onrea> However, I'm trying with wordpress, too. same result
[20:12] <onrea> They will pass over the question when they see 'Discourse' in the title!
[20:38] <lazyPower> hatch: something went awry with this installation of discourse that onrea posted about however - his port 80 was never opened... i don't see anything about a hook error
[20:38] <lazyPower> i suspect what happened is it ran into an error, and it was juju resolved without any actual intervention being done.
[20:39] <hatch> lazyPower oh I didn't even notice that - I just assumed that he was running local so it couldn't open it
[20:39] <lazyPower> it will open it regardless if it's done, you'll still see it in the status output
[20:40] <hatch> lazyPower ahh I see your comment, cool I'll take a look in a second
[20:43] <hatch> upvoted
[20:43] <hatch> thanks for expanding
[20:59] <lazyPower> thanks for the upvote :-)
[21:03] <dpb1> hi -- in ec2, I'm bootstrapping with some storage created in the same region.  It's attaching to my bootstrap node automatically.  Is this expected?
[21:13] <marco-traveling> the discourse charm is broken
[21:14] <marco-traveling> there is a reason why it's not in the store yet
[21:16] <lazyPower> marco-traveling: ah that would explain it
[21:16] <marco-traveling> it's not compatible with latest upstream
[21:18] <lazyPower> it has a rev installation target though right? and defaults to that tag?
[21:18] <lazyPower> it's been about 4 months since i dove into it last
[21:18] <marco-traveling> yes. so you could use it with an older working version
[21:19] <lazyPower> ok i'll amend the comment to reflect those details after dinner
[21:19] <lazyPower> ty marco
[21:46] <lazyPower> dpb1: well, that is odd.
[21:46] <lazyPower> dpb1: i can't say that I have bootstrapped with storage in the same region though - you mean an EBS volume provisioned, but "free" in the listing, right?
[22:03] <dpb1> lazyPower: ya, that's right
[22:03] <dpb1> lazyPower: has to be in the same AZ even
[22:04] <lazyPower> ok, let me see if i can reproduce
[22:04] <lazyPower> what version of juju are you on?
[22:10] <_2_Heben2> Mario
[22:10] <_2_Heben2> Mario
[22:13] <marco-traveling> Luigi
[22:13] <marco-traveling> Luigi!
[22:23] <lazyPower> dpb1: well, it didn't auto map for me
[22:24] <dpb1> :(
[22:31] <dpb1> I'll see if I can get it more reproducible...
[22:31] <lazyPower> dpb1: stable or unstable series of juju?
[22:35] <dpb1> 1.16.x
[22:35] <lazyPower> ok, i'm running the unstable series too, so that's another factor to consider.
[22:38] <dpb1> lazyPower: thx, I appreciate you trying.  I'll reply to that email thread if I find anything more concrete.
[22:57] <lazyPower> dpb1: np. happy to help
[23:07] <davecheney> marco-traveling: o/
[23:08] <davecheney> what would it take for you to promulgate the mysql charm for trusty
[23:08] <davecheney> ?
[23:08] <marco-traveling> davecheney: tests
[23:08] <marco-traveling> davecheney: I've almost got them all written, should be landing next week
[23:09] <marco-traveling> then we just need to run the tests against a trusty bootstrap
[23:09] <marco-traveling> and if it passes I can promulgate
[23:09] <marco-traveling> davecheney: but as for physical limitations, there are none
[23:09] <marco-traveling> we /could/ do it right now
[23:11] <davecheney> marco-traveling: /would/ you do it for me ?
[23:11] <davecheney> I have environments which have no valid precise images
[23:11] <davecheney> so I need trusty charms
[23:20] <marco-traveling> davecheney: ehhhhhhhh I would love to, really I would, but it kind of flies in the face of the whole "no trusty charms without tests"
[23:20] <marco-traveling> davecheney: what about another charm, like memcached, rabbitmq-server, mediawiki, etc?
[23:21] <marco-traveling> those have tests that we could spin up against a trusty series
[23:23] <davecheney> marco-traveling: i'll take anything you have
[23:23] <davecheney> right now i have the ubuntu charm
[23:23] <davecheney> and that a convincing demo doth not make
[23:29] <marco-traveling> davecheney: sure, I'm on an airplane right now, but I can have about 4 or 5 charms promulgated to trusty after they pass tests on trusty tomorrow morning (which kind of sucks for you) so maybe tonight if I have the energy for it
[23:30] <sarnold> heh, 'traveling' is quite specific then :)
[23:30] <davecheney> marco-traveling: oh right
[23:30] <davecheney> you literally are traveling
[23:30] <davecheney> whatever you can promulgate would be awesome
[23:30] <davecheney> currently of course deploying  from local:
[23:30] <marco-traveling> davecheney: while we're here, can you verify if float is a valid configuration type?
[23:31] <davecheney> marco-traveling: good question
[23:31] <davecheney> my initial answer is no
[23:31] <davecheney> only because I've never seen it used
[23:31] <marco-traveling> it's in the docs, but I don't trust that
[23:31] <marco-traveling> yeah, mine as well
[23:31] <marco-traveling> I thought it was only int, string, bool
[23:31]  * marco-traveling would love enum
[23:31] <davecheney> marco-traveling: i'd bet a small developing economy that any instances of floats are actually "0.1"
[23:32] <davecheney> lemmie check
[23:34] <davecheney> marco-traveling: looking at the code
[23:35] <davecheney> ints, bools, strings
[23:35] <davecheney> and maps and lists composed of those primitives
[23:35] <marco-traveling> davecheney: cool, I'll update the docs
[23:35] <davecheney> marco-traveling: let me test this
[23:35] <davecheney> but yes, no floats
[23:35] <davecheney> they should be expressed as strings
[23:35] <marco-traveling> davecheney: wait, what? we can do lists?
[23:35] <davecheney> the net effect will be almost the same
[23:35] <marco-traveling> davecheney: sure, I get that
[23:35] <marco-traveling> but what is this maps and lists you speak of
[23:36] <davecheney> marco-traveling: it's the same logic that ingests environments.yaml
[23:36] <jose> hey marco-traveling, do you know if there's a way for a charm related to another one to say 'hey, I want you to generate another user named abc'?
[23:36] <davecheney> they aren't available inside metadata.yaml
[23:36] <davecheney> sorry
[23:36] <davecheney> i mean you can write them
[23:36] <davecheney> but it won't validate
[23:37] <davecheney> and won't make it through config-get
[23:37] <marco-traveling> davecheney: oh, right, i see
[23:37] <marco-traveling> davecheney: I'll open a feature request for an ENUM config option, doubt it'll go anywhere, but having strict set of values would be nice imo
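Per davecheney's reading of the code ("no floats ... they should be expressed as strings"), a charm wanting a fractional setting would declare it as a string in config.yaml and parse it itself. The option name and default below are illustrative:

```yaml
# Hypothetical charm config.yaml: float is not a supported option type,
# so fractional values are carried as strings.
options:
  cache-size-gb:
    type: string
    default: "0.5"
    description: Cache size in gigabytes; the charm parses this as a float.
```

The net effect is almost the same as a real float type, as noted above; the charm just does the conversion after `config-get`.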
[23:38] <marco-traveling> jose: you could do that in the interface, if you create one
[23:38] <jose> do you have an example? trying to do that for postfix and the reddit charm I will *try* to write
[23:38] <marco-traveling> jose: so, if you're creating an interface, just have a key that accepts a comma-separated list of users
[23:39] <marco-traveling> then the charm can chop the list up and figure out which users haven't been created yet and which have, etc
[23:39] <jose> is that possible in Bash?
[23:46] <marco-traveling> jose: yes
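Splitting a comma-separated user list is indeed straightforward in bash. A minimal sketch; the relation key and the list value are stand-ins for whatever the interface would actually define:

```shell
#!/bin/bash
# In a relation hook the list would come from something like:
#   users="$(relation-get requested-users)"
users="alice,bob,carol"   # stand-in value

# Split on commas into an array.
IFS=',' read -ra user_list <<< "$users"

created=""
for u in "${user_list[@]}"; do
    # Here the charm would check whether the account exists and create it
    # (e.g. with useradd) if not; we just collect the names.
    created="$created $u"
done

echo "would create:$created"   # -> would create: alice bob carol
```

The same loop body is where the charm would compare against already-created users and skip those, as marco-traveling describes.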
[23:46] <jose> I'm checking the relation docs atm, I just found out about relation-set