/srv/irclogs.ubuntu.com/2018/07/24/#juju.txt

veebersexternalreality: the check merge job runs the tests in a xenial container, we should be able to tell which version of mongodb it's running easily03:38
* veebers checks03:38
veebersexternalreality: I can see it's installing juju-mongodb3.2 from xenial-updates (this matches with what I see in the make file install-deps)03:39
babbageclunkI think it probably depends which series the controller is running.03:40
veebersaye, we have mongodb-server-core, but that's bionic/cosmic only03:43
veebersbabbageclunk: if I've deployed an app under an alias (i.e. juju deploy cs:ubuntu blah) is it possible, in code, to map blah -> cs:ubuntu? (i.e. app name to charm name)04:03
babbageclunkveebers: sorry, didn't see this - yeah, an application has a charm, which has a url.04:15
veebersHmm, I can see a way to get name -> charmurl04:15
babbageclunkveebers: What context are you in?04:15
veebershah sweet04:15
veebersbabbageclunk: Deciding what extra metadata needs to be stored and what exists already for doing a resource-get on behalf of a charm04:15
babbageclunkveebers: ah, ok - so in the unit agent?04:16
veebersbabbageclunk: I'm not 100% certain where it will occur, but the unit agent makes a lot of sense. I'm just storing the data atm (at deploy-ish time)04:18
babbageclunkoh gotcha04:18
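As a side note, the same alias-to-charm mapping is also visible from the CLI (a sketch; field layout per juju 2.x status output, with "blah" being the alias from the example above):

    juju status blah --format=yaml
    # under applications -> blah, the "charm" field holds the charm URL,
    # e.g. cs:ubuntu-12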
manadartNeed a review for LXD cluster nodes and AZs: https://github.com/juju/juju/pull/8961 08:15
stickupkidmanadart: i'm on it08:29
stickupkidmanadart: i have a question about the "default" profile, should we be using "default", or should we be using some way to get the profile name?08:30
manadartstickupkid: We'll be discussing and evolving that in the course of designing profile/device pass-through.08:31
stickupkidmanadart: perfect, just reading your PR and that cropped up08:31
stickupkidmanadart: slightly OT, i was looking into cleaning up some lxd tests yesterday and noticed this https://github.com/juju/juju/blob/develop/provider/lxd/environ.go#L46 08:33
stickupkidmanadart: turns out we never send the provider to the environ - so we should clean that up and remove it08:34
manadartstickupkid: Yes; I also added some TODOs in my PR for moving logging/checking. Created a card for it yesterday.08:35
stickupkidmanadart: perfect :)08:35
stickupkidmanadart: done - LGTM08:42
manadartstickupkid: Thanks.08:42
stickupkidmanadart: "github.com/juju/juju/cmd/juju/machine.TestPackage" that package is failing constantly now08:43
manadartstickupkid: Will look.08:43
stickupkidmanadart: I've made a bug for it https://bugs.launchpad.net/juju/+bug/1783284 08:47
mupBug #1783284: Intermittent unit test failure: machine.TestPackage <juju:Incomplete> <https://launchpad.net/bugs/1783284>08:47
stickupkidI had a look last week, but I wasn't able to work out why we don't get any real good stack trace of the error08:48
stickupkidIn fact i tried to run the go test with -count=64 to try and force it into failing, but it didn't fail locally at all08:48
stickupkidI've got a theory (probably wrong) that it's failing on a tear down08:51
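A rough way to hammer the suspect package locally, along the lines of the -count=64 attempt above (package path taken from the failing test name; the iteration count is arbitrary):

    cd $GOPATH/src/github.com/juju/juju
    go test -count=64 ./cmd/juju/machine/...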
manadartstickupkid: Ack.08:54
stickupkidmanadart: how much do we want to refactor the lxd tests?09:45
stickupkidthe goal being, removing stubs?09:45
manadartstickupkid: That is my position, but it's maybe too much to bite off the whole lot as a single exercise.09:46
stickupkidyeah, that's what i'm thinking, maybe incremental steps...09:47
manadartstickupkid: Another one: https://github.com/juju/juju/pull/8964 10:51
stickupkidmanadart: approved, much better - less branching10:54
manadartstickupkid: We should probably start back-porting LXD provider commits to 2.4 as well...10:55
manadartstickupkid: Ta.10:55
stickupkidmanadart: yeah, makes sense to me10:55
rick_h_morning party folks11:50
manadartrick_h_: Howdy.11:58
rick_h_bdx: ping when you're about12:36
rick_h_kwmonroe: same to you please12:36
magicaltroutkwmonroe has the holiday blues13:15
magicaltrouthe's gone on strike13:15
rick_h_manadart: doh13:16
rick_h_oops magicaltrout13:16
rick_h_magicaltrout: do you have any use for lxd profile edits that we can/should be aware of for big data charms so we can spec/make it nice and awesome?13:16
=== plars_ is now known as plars
kwmonroerick_h_: strike is over, i have returned from holiday.  we don't customize lxd profiles in big data charms today, but k8s does.  see this for the rigamarole: https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD.  if/when big data charms need gpu accel in containers, or have a process that needs /proc or /sys, they might need something similar to the cdk profile.14:38
kwmonroerick_h_: any config or command that makes editing lxd profiles easier would be nice and awesome.14:39
kwmonroe(i mean, juju config/command)14:39
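For reference, a profile tweak like the CDK-on-LXD one is plain lxc commands today; a minimal sketch with an assumed profile name and only illustrative keys (the linked wiki has the actual profile):

    lxc profile copy default k8s-lxd            # hypothetical profile name
    lxc profile set k8s-lxd security.privileged true
    lxc profile set k8s-lxd security.nesting true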
rick_h_kwmonroe: k, just wanted to check in.14:40
fallenouro/14:42
fallenourI'm looking to build a high availability juju controller set (3) as an individual model, what's the best way to do this? I want to use a model so I can take advantage of the permissions controls14:43
rick_h_fallenour: I'm confused. So basically you bootstrap, juju switch controller, juju enable-ha and you get three api servers setup and running14:45
rick_h_fallenour: then any models you create/use are using those HA api servers in the controller model there14:45
rick_h_fallenour: is there something else you're looking for?14:45
fallenourrick_h_: Correct, I want to enable ha, but I also want to control how that's accessed. I don't remember how I did it last time, but I set up a model, and inside that model I built all of my controllers. I'm not sure how I did that14:46
rick_h_fallenour: so the controller is the api server and then you can juju add-model and use juju add-user to create users and then juju grant... those users access to different models14:46
rick_h_fallenour: that's how the layering functions14:46
fallenourI guess a better way to put it, unless the models are divided, everyone that has access to the model can make any changes to any system inside that model. I want the controllers to be apart from model A, inside model B, and only give access to Model B to admins14:47
rick_h_fallenour: so each model can be granted access independently14:47
fallenourrick_h_: Correct. My issue is, if  I enable-HA inside the normal model, the controllers will also be inside that same model, model A, and not inside of model B,14:47
rick_h_fallenour: so enable-ha only does one thing. It brings up more machines into the controller model. (juju switch controller) if you juju status on that model you can see the new machines listed14:48
fallenourrick_h_: So you are saying all I'll need to do is simply create a new model, and then activate HA inside of that new model, model B, and that will do it?14:48
rick_h_fallenour: enable-ha can never touch any other model14:48
fallenouryes14:48
fallenourYES!14:48
rick_h_fallenour: I'm saying that enable'ing HA in any other model than the controller model doesn't make sense/work14:48
fallenourOk, so we are on the same page then, so the Model B (controller model) already exists upon deployment?14:48
rick_h_fallenour: yes, bootstrap comes out of the box with a controller model for the controller bits (e.g. HA api servers) and a default model which you can use to deploy workloads/etc14:49
fallenourso switch over to Model B(Controller model) and THEN do juju enable-ha, correct?14:49
stickupkidmanadart: https://github.com/juju/juju/pull/8965 these are just some clean up, that we should land, we can always work on more in the future14:49
rick_h_fallenour: the controller model can never be removed14:49
rick_h_fallenour: it only goes away with destroy-controller command14:49
rick_h_fallenour: but the default model can be removed, new ones added, etc14:49
fallenourok so that explains a lot14:49
manadartstickupkid: Ack.14:49
rick_h_fallenour: so it sounds like I'd suggest you bootstrap, juju switch controller, juju enable-ha, juju remove-model default, and then follow https://docs.jujucharms.com/2.4/en/tut-users from there14:50
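A minimal sketch of that flow, assuming a MAAS cloud; the cloud, controller, user, and model names are placeholders:

    juju bootstrap mymaas mycontroller   # creates the "controller" model plus a "default" model
    juju switch controller               # the controller model hosts the API servers
    juju enable-ha -n 3                  # brings up 3 HA controller machines in this model
    juju status                          # the new machines are listed here

    juju remove-model default            # optional: drop the stock workload model
    juju add-model workloads             # a separate model for workloads
    juju add-user alice
    juju grant alice write workloads     # alice can use the workload model, not the controller model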
fallenourrick_h_: Ok, so just rebuild from scratch, all the way back from bootstrap?14:53
rick_h_fallenour:  not sure what you've got so I can't say15:03
rick_h_fallenour: I mean all of that is possible at any time15:03
bdxthose docs are looking real nice15:08
bdxthank you to everyone who put effort into migrating/making the new docs.jujucharms.com15:10
bdxits beautiful15:10
stickupkidmanadart: i get the comments, i think my PR is a better first step, before tackling the arch feature...15:11
stickupkidrick_h_: help update for LXD - https://github.com/juju/juju/pull/8966 15:12
plarsHi, I have a strange error from a unit. We tried to remove the application, and it shows up as terminated, but it won't go away. In debug-log I see this:15:14
plarsmachine-0: 23:06:58 ERROR juju.worker.dependency "unit-agent-deployer" manifold worker returned unexpected error: failed to query services from dbus for application "jujud-unit-sru20170725637-1": Failed to activate service 'org.freedesktop.systemd1': timed out15:14
manadartstickupkid: You can lick the arch feature in less code than it takes to shim it out here.15:14
plarsanyone seen something like that? The current version of juju in that model is 2.2.9 but it won't let me upgrade it15:14
stickupkidmanadart: ok, i'll drop that commit15:15
manadartstickupkid: We also don't need to shim out IsSupportedArch, because we can always just return a mock arch that gets the return that we want from that method.15:15
manadartstickupkid: I am thinking to ice my logging PR too for now. What we discussed with rick_h_ means putting back some mess that we took away :(15:17
stickupkidmanadart: dropped that commit, so it's just a simple update to the provider15:18
rick_h_manadart: do we need to change the plan?15:20
rick_h_plars: no, that's a new one to me.15:22
plarsrick_h_: any suggestions on debugging or repairing it?15:22
rick_h_plars: looking for existings bugs atm to see if there's something more to help15:23
plarsthanks!15:23
rick_h_plars: and coming up empty...15:24
rick_h_plars: can you file a new bug with details on version/cloud/what was running/etc please?15:24
rick_h_plars: I mean it might be some cleanup step race condition but I'm not sure. The fact that it's 2.2.9 makes me :( but the issue is that if you can't upgrade then double :/15:25
plarsrick_h_: sure, tbh I'm not sure how it got in this state. We've been running very stable for a long while and got a similar error on a unit when trying to deploy. Then it started having this on an existing unit that we tried to get rid of15:25
plarsrick_h_: I'd be happy to upgrade, but it gives me an error that it can't because of that unit15:25
plarsERROR some agents have not upgraded to the current model version 2.2.9: unit-sru20170725637-1 15:26
rick_h_plars: ugh, yea it's so tough to debug this stuff. Maybe we can see if we can upgrade around it or force it in some way15:26
rick_h_plars: is the machine that the application on still there?15:26
rick_h_plars: e.g. can we juju remove-machine --force to help put some pressure on things?15:26
plarsrick_h_: yes - it's maas, but I can't remove the machine because it hosts a lot of other applications/units15:26
rick_h_plars: yea, that's what I was worried about15:26
rick_h_plars: can they be migrated off?15:26
rick_h_plars: I guess no, but figure I'll ask15:26
plarsrick_h_: on another model, I also have a machine that I can't remove, even with --force15:27
rick_h_stickupkid: one thought sorry, can you verify that the current help text there conforms to that template we got a while ago?15:27
rick_h_stickupkid: just to make sure that while we're in there we bring it up to standard across the whole thing15:27
plarsrick_h_: on that one, the maas machine that it was once using is gone. It just seems to silently fail15:27
rick_h_plars: ? that seems odd. --force with remove-machine is a pretty big hammer that usually doesn't fail unless something is really odd15:28
plarsand the machine never disappears. On that one, the whole model can go if there's an easier way to force that15:28
plars2        down   10.101.49.149  nx38gq   xenial  default  Deployed 15:28
plarsis how it shows up15:28
rick_h_plars: no, we're looking to add some add-model force bits to 2.5 this cycle but I don't have it yet15:28
rick_h_sorry, remove-model --force bits15:28
plarsthat one is stuck on 2.0.2 - and can't update for the same reason. No units are even deployed15:28
rick_h_plars: is it in a happy state? e.g. does the agent report ok?15:29
rick_h_plars: I'm curious if model-migrations can be used to help garden up to the later versions with fixed bugs15:29
rick_h_stickupkid: https://docs.google.com/document/d/1ySjCNqd0x6veLfcBetxLI9NH7qfw3xayLWJNqXMvyW8/edit specifically15:29
stickupkidrick_h_: sure let me look15:31
plarsrick_h_: the agent for the one where I can't remove the machine?15:31
rick_h_plars: right, but it shows it still there?15:31
plarsrick_h_: yes, if I do juju status on that model, it shows the machine is there. In reality, that's the only machine left in the model, and it's gone15:32
rick_h_plars: oh I see15:32
plarsrick_h_: https://bugs.launchpad.net/juju/+bug/1783357 - please tell me if you need any other information, or have suggestions for debugging15:47
mupBug #1783357: Failed to activate service 'org.freedesktop.systemd1': timed out <juju:New> <https://launchpad.net/bugs/1783357>15:47
fallenouranyone in here have a good contact with the maas team? It's really becoming frustrating that the system I'm working with keeps trying to change all of its configuration instructions mid-build. It completely defeats the purpose of asking me what domain, IP, storage config, etc if you are just going to randomly generate all of that, and change it all, including hardware zone.15:49
pmatulisfallenour, consider documenting your situation and sending to the juju mailing list15:52
fallenourwhats the flag option for acquiring devices in a specific zone with juju?18:39
fallenouris it --zone=<zonelocation>18:39
rick_h_fallenour: yea check out https://docs.jujucharms.com/2.4/en/charms-deploying-advanced under "deploy --to"18:55
pmatulisfallenour, 'zone' is a placement directive. it can be used whenever juju spawns a machine (commands: deploy, add-unit, add-machine, bootstrap)18:56
fallenouris it --zone=<zonelocation>18:56
fallenoursorry, meant to send that earlier18:56
pmatulisfallenour, it is dependent on your chosen cloud provider18:57
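Typical forms of the zone placement directive on a MAAS cloud, sketched from the "deploy --to" docs linked above (the zone name is a placeholder; exact support varies by command and provider):

    juju bootstrap mymaas --to zone=zone1
    juju deploy ubuntu --to zone=zone1
    juju add-unit ubuntu --to zone=zone1
    juju add-machine zone=zone1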
fallenourwow, I'm losing my mind. Ok, next question. So I've got ha enabled, thank you very much for that btw rick_h_ rebuilding was definitely the smarter decision, I'd like to integrate docker. I did just notice there was a remove-k8s, which makes me think there's a segment for kubernetes and docker. Can you provide some enlightenment for me on that?18:58
fallenourpmatulis: cloud provider is maas.18:58
rick_h_fallenour: heh, so now you're on the bleeding edge of stuff. There's work going on to enable Juju models on k8s but it's kind of "soft launch" as things like storage and features are in progress.18:59
rick_h_fallenour: definitely not ready for production infrastructure yet unfortunately18:59
fallenourOh! Also, for those deploying charms, if they keep pushing because "its not working" with maas, tell them to check to see if they imported bionic (18.04LTS). It hung up my openstack deployment because of that.18:59
rick_h_fallenour: however, if you have the time/resource definitely play with it and see if it fits your needs18:59
rick_h_fallenour: ah, definitely. Having the right MAAS images is vital18:59
fallenourrick_h_: You know, as much as I might hate myself in the morning, you guys have helped me a ton. Ive got 3 extra servers. Ill see if I cant contribute a few pints of blood and a few more bpm to help with testing it.19:00
fallenouralso, the new bionic boot, it looks amazing. im not even gonna lie, its beautiful19:00
rick_h_fallenour: all up to you, I just want you to know where stuff sits and as we build stuff for users to solve problems it's always <3 to get feedback that we're on the right track19:00
fallenourits like 4-6 fonts smaller.19:00
rick_h_lol19:01
fallenourSo rick_h_ pmatulis I was thinking. I'd like to build a web app cluster with a separate docker cluster, both backed by ceph storage clusters, so the applications and containers can store across the ceph storage drive array across the 3 servers each, what are your thoughts?19:12
fallenourfrom my perspective, it should give all the apps and containers access to a total of about 1.8 TB storage space, with the ability to easily swap out the drives to increase size. Are there any risks I should be aware of, and am I overlooking anything?19:13
rick_h_fallenour: you're stepping into kwmonroe and tvansteenburgh's expertise there. I'm not sure19:15
rick_h_basically can you setup that ceph cluster as a storage provider for kubernetes and deploy in that way?19:15
fallenourkwmonroe: tvansteenburgh Can you two provide some insight? It'll be the first time I'll be combining ceph storage with docker.19:17
fallenourrick_h_: yea its a pretty interesting idea, especially if it pans out. completely flexible application deployment with completely flexible storage.19:17
tvansteenburghfallenour: if you're using kubernetes for your "docker cluster" then integrating with ceph will be pretty straightforward19:21
tvansteenburghif you're not, then i have no idea19:21
fallenourtvansteenburgh: Um...can you clarify? Wait, easier question, is there a kubernetes for juju?19:24
tvansteenburghfallenour: yah, `juju deploy canonical-kubernetes`19:25
tvansteenburghor, for a minimal version `juju deploy kubernetes-core`19:25
fallenourtvansteenburgh: how many machines does it take? I only currently have 3 physical machines set aside for it, is that enough?19:25
tvansteenburghfallenour, then you want kubernetes-core19:26
kwmonroedang it tvansteenburgh, i knew that one.  fallenour, here's some details on both of those: https://jujucharms.com/canonical-kubernetes/ (takes 9 machines) and https://jujucharms.com/kubernetes-core (takes 2 machines)19:26
fallenourtvansteenburgh: I was just reading up on it as well, it seems a lot of thought went into building it. Yea I just found that one19:26
fallenourOOH MY GAWD 9 MACHINES!?19:26
kwmonroe9 times the fun19:26
fallenourkwmonroe: Slow your roll there google, we poor people over here.19:27
fallenourLOL19:27
kwmonroe:)19:27
kwmonroefwiw fallenour, the big bundle is meant to represent a production cluster, so you have 3 etcd units, 3 workers, 2 masters.. thars 8 right thar.19:28
tvansteenburghfallenour: for the poor people we have microk8s: https://github.com/ubuntu/microk8s19:28
fallenourkwmonroe: Any reason why I can't cram those onto 3 boxes?19:28
fallenourkwmonroe: tvansteenburgh  I mean, 3 3 and 2.19:28
tvansteenburghfallenour: sure you can19:29
tvansteenburghfallenour: you could use the lxd provider and put it all on one machine19:29
fallenourtvansteenburgh: Im sensing a downside coming.. o.o19:29
tvansteenburghthere's no downside19:29
kwmonroefallenour: you could certainly adjust the bundle.yaml for canonical-kubernetes (cdk) to change num_units from 3 3 2 to 1 1 1.  but if you're gonna hack it up like that, you may as well use kubernetes-core (which we've hacked/condensed for you).19:30
knobbyfallenour: if that one machine falls over you're out of luck19:30
fallenourkwmonroe: knobby tvansteenburgh No no no, I mean put 3 3 and 2 on 3 physical machines, one on each for each machine. so 1 1 and 1 2 2 and 2 3 and 319:30
tvansteenburgha game of sudoku spontaneously broke out19:31
fallenourso spreading all 8 over 3 machines instead of 8 machines. I know 8 is a lot more beefy, but for someone like myself the demand won't justify 8 machines for a while19:31
fallenourOoh, yea, so A, B, C Machines, etcd1 worker1 and master1 on machine A, etcd2, worker2, and master2 on machine B, and etcd3 and worker3 on machine C19:32
fallenoursorry19:32
knobbyfallenour: the only downside to reducing the number of machines is that you lose the highly available part of it to a degree. I'm running it and not using 9 machines. I smashed etcd onto the (single)master I have and 2 of the workers19:32
fallenourknobby: well you would still keep the HA, just spread it across less hardware.19:32
fallenourthe odds of that many boxes dying at the same time without a serious issue occurring is really low.19:33
knobbyfallenour: but if you lose 2 machines you're lost. In the 9 machine setup, it would be ok19:33
knobbyI completely agree, fallenour and I am making the same gamble locally19:33
fallenourknobby: yea but again, the odds of even losing 2 machines at the same time at a moderate load is still really low. Like I'm buying a lotto ticket low, and I'll see you all on my yacht. I got that much better odds.19:34
fallenourmake that two yachts then XD19:34
fallenourknobby: but yea, I mean whats the best way to build that into a yaml?19:34
kwmonroefallenour: i would start with the kubernetes-core yaml (https://api.jujucharms.com/charmstore/v5/kubernetes-core/archive/bundle.yaml), adjust it so there are 3 machines in the machines section, keep easyrsa as is (in a lxc container on machine 0), bump up num_units for etcd and k8s-worker to 2 (or 3, or whatever), and adjust those "to:" directives to be like "to: - '0'  - '1' - '2'" as you want.19:37
knobbyfallenour: just use the lxd stuff like kubernetes-core has. I'm not a master of the --to stuff unfortunately.19:37
kwmonroefallenour: you'd effectively be making a bundle somewhere between core and cdk.  the only thing i'd be careful of is to ensure the k8s-master and workers are on different machines.. so like easyrsa+master on machine 0, etcd+worker on machines 1 and 2.19:39
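A rough sketch of that adjusted bundle, following the to: directives above (key names per the Juju 2.x bundle format; charm URLs are assumptions, and flannel, charm options and the relations section are omitted here and should be copied from the upstream kubernetes-core bundle.yaml):

    machines:
      '0': {}
      '1': {}
      '2': {}
    applications:
      easyrsa:
        charm: cs:~containers/easyrsa
        num_units: 1
        to: ['lxd:0']
      kubernetes-master:
        charm: cs:~containers/kubernetes-master
        num_units: 1
        to: ['0']
      etcd:
        charm: cs:~containers/etcd
        num_units: 2
        to: ['1', '2']
      kubernetes-worker:
        charm: cs:~containers/kubernetes-worker
        num_units: 2
        to: ['1', '2']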
fallenourknobby: kwmonroe yea Im working on it now, ill let you guys take a look once i get it done. Feel free to let me know your thoughts once I get it done.19:40
fallenourif you guys like it, ill publish it.19:40
knobbyeh, in a low usage scenario I'm not worried about mixing masters and slaves on the same hardware. But then again, I live dangerously...19:40
fallenouralright, so I got it done19:45
fallenourknobby: kwmonroe tvansteenburgh I'm curious though, do you think I should go ahead and build ceph into it?19:46
knobbyfallenour: if that was the end goal, I would19:47
fallenourknobby: what's the best way to integrate it? should I just toss it in anywhere, or do I need to establish a specific relationship?19:49
fallenourkwmonroe: tvansteenburgh rick_h_ Do I just kinda "toss" ceph osd/mon onto the pile, and it's "good"?19:52
kwmonroefallenour: you're out of my league there -- i haven't used ceph myself.19:57
fallenourkwmonroe: saaaadness. YOU WERE THE CHOOOSEN ONE!19:58
kwmonroeyou guys buy 6 more machines, and i'll google how to use ceph ;)19:58
fallenourkwmonroe: LOOOL19:58
fallenourkwmonroe: Yes...."buy"....*pulls out lightsaber and laser pistols*19:59
knobbyfallenour: you need some monitors, which are like the kubernetes masters, and then machines with disks, which are the osd part20:03
knobbyyou probably want to run osd/mon on each machine is my guess20:03
knobbyfallenour: I have a PR up to allow relating ceph to kubernetes and getting a default storage class for free so you can just make persistent volume claims and get them backed by ceph.20:11
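A rough sketch of adding ceph to the same 3 machines described above (machine numbers, unit counts and the osd-devices value are placeholders; ceph-mon and ceph-osd are the standard charms):

    juju deploy ceph-mon -n 3 --to 0,1,2
    juju deploy ceph-osd -n 3 --to 0,1,2 --config osd-devices=/dev/sdb
    juju add-relation ceph-osd ceph-mon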
magicaltroutkwmonroe: we've been charming up https://livy.incubator.apache.org/ in our efforts to get Hue up and running. Whilst not part of the Big Top stack, would you like us to eventually stick it in bigdata-charmers or just hold on to it?20:39
kwmonroemagicaltrout: i'm cool with you holding it.  the only benefit for putting it into bd-charmers would be that we would auto build/release it when things like layer-basic changes.  if you own it, i'll just open an issue reminding you guys to push it yourselves (which is what i do for giraph)20:41
magicaltroutfair enough20:41
magicaltroutdue to a myriad of wifi issues we didn't have a call in the end but i did shoot uros a bunch of questions which he half answered with promises of grandeur and so on20:42
magicaltrouti'll forward it on20:42
magicaltrouthe also said he wished he could grow a beard like rick_h_ but sadly his child-like features prevent it....20:44
rick_h_LoL20:45
rick_h_Naw, he's got that wise man sans beard thing going20:46
rick_h_Eternal scholar20:46
magicaltroutoh, i thought he just couldn't be bothered going to the hairdressers20:46
veebersbabbageclunk, anyone: have you seen that odd github tls issue in any PR over the last day?21:17
babbageclunkveebers: no, I didn't see it yesterday21:18
babbageclunkveebers: although it looks like check-merge jobs are failing at the point of launching the container21:20
babbageclunkand merge jobs too.21:21
babbageclunkveebers: eg http://ci.jujucharms.com/job/github-check-merge-juju/2518/console21:22
babbageclunkI'm having a look on grumpig now21:22
veebersbabbageclunk: oh :-\ ok thanks, let me know what you find. Thanks re: tls issue21:23
babbageclunkveebers: I can't launch a new lxd container on it, getting this: https://paste.ubuntu.com/p/CJS4rKtJbb/21:25
veebersbabbageclunk: I would just reboot grumpig for a start, easiest and laziest way to debug :-)21:26
babbageclunkok21:26
veebersbabbageclunk: there are a couple of things you need to do first21:26
babbageclunkcool21:26
cory_fuwallyworld: That update to Juju edge did fix the issue I was having, thanks21:38
wallyworldcory_fu: great! pr is lgtm also, looks awesome21:38
cory_fuwallyworld: Great.  I'll get a quick PR together for your charms repo to work with that before I EOD21:39
wallyworldno rush!21:39
veebershey wallyworld o/ welcome back to the sensible timezone :-)21:39
wallyworldindeed21:39
babbageclunkalso sensible hemisphere21:43
veebersanastasiamac: when you have a moment could you review: https://github.com/juju/juju/pull/8346 the part I was specifically interested in is 391-392 22:37
veebersanastasiamac: heh, hold off for now, want to make a slight change to it22:53
anastasiamacveebers: oh awesome! good thing i did not look yet then :)23:16
veebers^_^23:17
