/srv/irclogs.ubuntu.com/2018/09/14/#ubuntu-server.txt

smoserJimmy06: did you try running again?00:30
smoserthere's a good shot it will work after a reboot.00:30
=== pleia2_ is now known as pleia2
ansibleesxi666Hello all, i need to build a linux server for build purposes. i have an issue with the Ubuntu 16 "server" ISO - booting with UEFI on an HP DL360 Gen 10 box with RAID 0 SSD disks - the Ubuntu 16 "desktop" CD works well, so does it matter much if i use server or desktop?02:37
tomreynansibleesxi666: you can install desktop and convert it to a server installation, but i would recommend installing the server installation instead (since converting isn't that easy). by the way, you are probably referring to ubuntu "16.04 LTS", there is no "ubuntu 16"02:42
tomreynansibleesxi666: what is the error message you run into when you try to install using the server installer?02:43
leftyfbtomreyn: all of this has been discussed in #ubuntu where they asked this same exact question02:43
tomreynoh, maybe ansibleesxi666 should refer to those answers then02:43
ansibleesxi666yes, 16.04.5 is what i am using. i see a strange issue: i have 3 SSD disks on the box, each 1 TB, and we want to use raid 0 for perf. when i boot the box with the server ISO the OS installer shows the disk at 600 GB; when i try the desktop iso it shows the correct size of 3 TB02:45
ansibleesxi666disk as 600 GB i mean02:46
ansibleesxi666as i was in a rush i used the desktop ISO & built them for now02:46
ansibleesxi666i did google it & it says both desktop & server use the same kernel, so i went ahead with Desktop02:48
tomreynansibleesxi666: so did you ask these same questions in #ubuntu before?02:49
ansibleesxi666i did, i did not get an answer so i am here02:51
tomreynleftyfb seems to suggest it was discussed there before, maybe you missed some responses (left early)?02:52
tomreyn!irclogs02:52
ubottuOfficial channel logs can be found at https://irclogs.ubuntu.com/ . LoCo channels are now logged there too. Meetingology logs at https://ubottu.com/meetingology/logs/02:52
tomreynif you really got no replies (please point me to when it was previously discussed) i'll be happy to go over it with you again.02:55
=== Raboo_ is now known as Raboo
ansibleesxi666let me check03:13
ansibleesxi666No, i did not get the response i was looking for03:13
ansibleesxi666the last 2 messages were:03:14
ansibleesxi666(7:41:48 PM) leftyfb: ansibleesxi666: when one of those drives goes bad, you will lose all your data03:14
ansibleesxi666(7:42:33 PM) dxb left the room (quit: Ping timeout: 245 seconds).03:14
ansibleesxi666(7:43:26 PM) ansibleesxi666: our build team wants less time & they do not care about the data on the build box as the actual build goes in a central git repo.... these build nodes are purely for compute ... but the strange issue is why the server iso shows the disk size as 600 GB & not 3 TB03:14
ansibleesxi666brb03:14
ansibleesxi666i did more googling & i think i have a work-around, ie in my case a desktop iso vs server will not matter much as the core kernel is the same03:30
ansibleesxi666thanks for your time03:31
cpaelzergood morning05:05
lordievaderGood morning06:03
Jimmy06smoser: I tried more than 10 times with the same config07:38
muhahaHas anyone experience with on-prem landscape ?09:58
waveformmuhaha, what's up?11:17
muhaha@waveform did you try to dockerize this big thing?11:17
waveformmuhaha, there's juju charms for it but I don't think there's any (official) docker images11:18
blackflowisn't containerizing it.... contrary to its purpose?11:18
muhahaWhy?11:19
waveformnot necessarily - containers don't *have* to be many to a machine11:19
waveformfor instance, the juju charms are pretty flexible - they typically set up one machine for haproxy, another for pg, etc. etc. - now those might be "real" machines, or they could just be containers11:20
muhahaI will try to dockerize it, but I will have to understand this landscape thingy..11:20
muhahaI dont understand how to start this.. For example, the -quickstart package is useless for this. I will need to use landscape-server11:21
waveformyou're probably better off looking through what the quickstart script actually does - but trying it out on juju will give you a better idea of what a production setup really looks like11:22
waveform(we don't recommend quickstart for scalable production deployments)11:22
blackflowmuhaha: in that it has to control the entire machine and dockerizing it is isolating it? or am I misunderstanding something here11:23
muhahaI dont need to control host machine11:28
muhahaI need to control other ones...11:29
muhahaThat is why I need some gateway... It does not matter if it's running on bare-metal or in a container11:29
blackflowoh, I wasn't aware you could install the server on premises. I thought it was strictly SaaS.11:30
muhahaunfortunately there is no alternative for ubuntu (opensource) to manage other servers :(  So I will have to use landscape. Foreman can not handle this afaik11:34
muhaha*selfhosted11:35
blackflowmuhaha: there's always salt stack11:41
muhahaof course there are also ansible and chef11:47
blackflowansible is extremely slow. good for simple setups, but as you scale up in complexity of config, it's way way way too slow.11:49
blackflowbut the point being, if you want something analogous to landscape then saltstack and chef (and puppet) are more appropriate because of the client-server model and the ability for clients to trigger events between them.11:50
=== MannerMan_ is now known as MannerMan
=== jdstrand_ is now known as jdstrand
rharperJimmy06: are you running the 18.04.1 release of the installer?14:33
tekgeek1205So I have a 10G fiber connection that I'm trying to use for the physical connection to an OpenvSwitch. I need the host to have access to the same interface to serve as a NAS on that port. I'm having problems with assigning a static ip address. DHCP works fine but isn't an option; the DHCP server will be a pfSense KVM. I've tried both ports with DHCP and they work fine. I tried both with17:12
tekgeek1205static configs and I can't get DNS. I can still ping 8.8.8.8 but not google.com. I'm sure it's just a simple mistake caused by my lack of knowledge. Here is my interfaces file: https://pastebin.com/5v9YvE1p17:12
nacc_tekgeek1205: what version of ubuntu?17:24
tekgeek120518.0417:24
tekgeek1205server, i reverted from netplan back to ifupdown17:24
nacc_tekgeek1205: you can talk to `systemd-resolve` to see how it's resolving hostnames?17:28
nacc_tekgeek1205: what is in your /etc/resolv.conf?17:28
tekgeek1205checking17:29
tekgeek1205nameserver 127.0.0.53....... so it's not getting a DNS?17:30
nacc_tekgeek1205: that's systemd-resolve17:30
nacc_*systemd-resolved17:30
nacc_tekgeek1205: so you need to ask `systemd-resolve --status` what it is using17:30
masonAh, I was curious how to get systemd-resolved to spit that out.17:31
tekgeek1205do you want that in a pastebin?17:31
nacc_tekgeek1205: yeah, that's probably useful17:31
tekgeek1205the first one was the resolv.conf17:31
nacc_tekgeek1205: or you can just read it, to see what is listed as 'DNS Servers' for that iface17:32
masonNot sure why they don't populate and maintain a text comment in /etc/resolv.conf since that's where people are going to look. Or even just a comment in that file telling people how to dig out the relevant status.17:32
nacc_mason: i mostly agree with you :)17:32
masonnacc_: I skipped the obvious first pick, "don't do that".17:32
tekgeek1205https://pastebin.com/1QUQReM717:33
tewardtekgeek1205: is that *all* you are getting?17:34
tewardthere's usually other lines than just that17:34
tekgeek1205yeah with a fresh boot and a static address....17:34
teward> static address17:34
tewarddid you set DNS record data in your netplan config?17:34
tewardand if so, what is it?17:35
tekgeek1205no, I'm using ifupdown; netplan and openvswitch are incompatible17:35
tewardthen did you set dns-nameservers in ifupdown?17:35
tekgeek1205yeah....https://pastebin.com/5v9YvE1p17:36
tekgeek1205that's my interfaces file17:36
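(For readers without the pastebin: a typical ifupdown static stanza with dns-nameservers looks roughly like the sketch below. The interface name and addresses are hypothetical, not tekgeek1205's actual config.)

    # /etc/network/interfaces - hypothetical static stanza
    auto enp3s0
    iface enp3s0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        # note: dns-nameservers is only honored when the resolvconf
        # package is installed (see cyphermox's point below)
        dns-nameservers 192.168.1.1 8.8.8.8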
masondns-nameservers in ifupdown don't negate systemd-resolved jumping in17:36
masonMy hope is that it uses the interfaces information, but I'm not sure17:36
tewardit might not be doing that properly, that's a systemd headache though.17:37
tewardyou can force your system to use the other DNS resolvers, but you'd have to fuss around with some ResolveD config files to do it17:37
masonFor my part, I found that purging resolvconf helped.17:37
masonI haven't tested all permutations.17:38
nacc_i don't have a 16.04 in front of me, but systemd-resolve --status, should be reporting a per-link entry, i thought17:40
cyphermoxtekgeek1205: mason: teward: "dns-nameserver" only works if you have resolvconf installed17:42
cyphermoxwe also don't install that by default, because resolvconf and systemd-resolved both want to be authoritative for nameserver info17:42
tewardcyphermox: so if you're using ifupdown with systemd-resolved how do you pass it the DNS servers to query via ifupdown configs?17:43
tewardor would that be a manual step called by `up` in the config?17:43
nacc_cyphermox: thanks for that info!17:43
cyphermoxteward: nothing you write in ifupdown is passed to systemd-resolved.17:43
cyphermoxor anywhere in systemd for that matter17:43
cyphermoxif you use ifupdown, you need to write resolv.conf yourself, or add your DNS in /etc/systemd/resolved.conf (the DNS= line)17:44
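(A minimal sketch of the resolved.conf route cyphermox describes; the nameserver addresses are examples only:)

    # /etc/systemd/resolved.conf
    [Resolve]
    DNS=192.168.1.1 8.8.8.8
    FallbackDNS=8.8.4.4

    # then restart the resolver so the change takes effect:
    sudo systemctl restart systemd-resolved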
tewardtekgeek1205: ^17:48
tewardcyphermox: is this documented anywhere?17:48
masoncyphermox: Ah, I must be thinking of my "funny upgrade" I did last week then.17:48
tekgeek1205waiting on a reboot then, i'll try changing /etc/systemd/resolved.conf17:49
tekgeek1205Thank you guys!!! DNS is working! This is my first big project with linux. I was about to give up and fall back to linux bridges until i could get a 2nd 10Gb uplink card for my switches. I'm still a bit green in the linux world.17:56
masontekgeek1205: \o/17:58
tekgeek1205now I can go on my merry way setting up containers and VMs. 10Gb from my workstation to my server has been a dream for years!!!! Time to put that ZFS array to work!18:05
compdoczfs?! oh no!18:10
odcno?18:11
compdocjk :)18:11
odcah :)18:11
tekgeek1205its also the root FS......that was fun18:20
DammitJimdo you guys have any recommendations of what is normally used as a "file server"19:41
DammitJimlike when one of your users logs on to a server via ssh19:42
DammitJimthe current directory where they land is actually a mount to a different server19:42
DammitJimis that normally done with just a samba server? any other more popular options?19:42
jellyfrom unix to unix I'd just use sftp19:44
jellyand let them connect to the system hosting actual files, no network filesystem use19:44
DammitJimoh, I meant like if I ssh into serverA which is an application server19:45
DammitJimI am taken to a directory that is actually part of a mount to serverB19:45
jellyoh, do you want users to have a $HOME on a shared file server19:45
DammitJim+119:45
DammitJimI've seen that done in the past19:46
DammitJimand I'm curious as to how that is normally done (especially if the backend file server is redundant)19:46
jellywe don't do that at all.  NFS v4 supposedly has all sorts of nifty features for that, including clustered nfs, but i have no idea which features actually work well and are reliable19:47
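(For the simple, non-clustered version of the shared $HOME jelly mentions, plain NFSv4 is just an export plus a mount; hostnames and paths here are hypothetical:)

    # on the file server (serverB), /etc/exports:
    /home    serverA(rw,sync,no_subtree_check)

    # on the application server (serverA), /etc/fstab:
    serverB:/home    /home    nfs4    defaults,_netdev    0  0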
DammitJimthanks jelly19:47
DammitJimso, related to this... are there any recommended file server clusters?19:48
DammitJimin our company, many of the M2M processes just move files around19:48
DammitJimI'm looking to find a way to store those files on some kind of redundant system in case that I have to do maintenance or upgrades to that resource19:48
UssatSo....we DO do that with NFS v419:53
Ussatits not trivial to set up19:53
Ussatso what you want is a clustered FS, which is also not trivial19:54
Ussatthere are a few ways to do it19:54
Ussatand none of them are simple19:55
tomreynthere are also those proprietary storage clusters which can export r/w via nfs19:55
Ussat^^19:55
jellyI bet netapp is simple!19:56
Ussatyup...those are the simplest, although more expensive19:56
Ussat...19:56
jellyglusterfs? ceph? do any of those work not horribly slow?19:57
Ussatthey are not made for speed19:57
Ussatthey are made (in theory) for resilience19:57
tomreyni never used either, but would expect them to work out in this use case, since they are also used for storage backends in clouds20:01
tomreyn(so there must be ways to configure them to not be super slow)20:02
jellydon't storage backends in clouds just expose objects that are then used as blockdevs20:02
UssatThey can both be used in this case, but neither are trivial20:09
DammitJimI've heard of netapp20:11
DammitJimUssat, what would you say is the advantage of having a file system cluster?20:11
Ussatyes...netapp is a thing20:11
DammitJimyeah, I read about glusterfs and was going to try it out in a virtual lab20:12
UssatOK, first you need to differentiate between a cluster enabled FS and a clustered FS20:12
Ussatwhat do you want to do20:12
Ussatthe reason I ask, is I work with systems that need to have a VERY high availability20:13
DammitJimso, I'm looking at this from the perspective of: normally a server is the file server20:13
DammitJimwell, if I need to do maintenance on that server or it breaks, then all the applications will bork20:13
Ussatso you need a cluster20:13
DammitJimso, I thought... a cluster of servers would take care of that problem20:13
Ussatcorrect, it can20:13
DammitJimso, if serverA goes down, serverB will continue to service whatever the apps need20:14
Ussatbut that is different than a load spread FS20:14
Ussatcluster20:14
DammitJimwhat is the difference between enabled and the other option?20:14
Ussatyou want active <<----->> active20:14
Ussatand they will share a FS, so when one dies, it releases the lock on the FS and the other picks it up20:15
DammitJimoh ok, so this is not like serverA and serverB are constantly synchronizing data between them?20:15
UssatWell, you can have that, but its different20:15
DammitJimwho hosts the FS?20:15
DammitJim'cause then what happens if the FS server goes down?20:15
DammitJimbtw, I know some of my thinking sometimes will never happen20:16
UssatGenerally the FS is hosted on both20:16
Ussatand shared20:16
DammitJimso, just yell at me if I'm thinking the wrong way20:16
Odd_BlokeWhat's driving the need for the sharing to happen at the filesystem level?20:17
xrandrDammitJim: make sure you rebalance the filesystem often20:17
DammitJimfile processing and hosting20:17
xrandrthe glusterfs20:17
DammitJimrebalance? oh gosh20:17
Ussatso there are a few ways....what WE do, is we have a HUGE isilon that is the FS, which is replicated between datacenters20:17
xrandrDammitJim: it does it for you, there's a command you can use. gluster volume rebalance <VOL> start20:17
xrandrDammitJim: I am very fond of gluster :(20:18
xrandr:) *20:18
Odd_BlokeYour needs might be better met by using an object store for the files, where you have an API that you use to push and pull files.20:18
DammitJim:) or :( ?20:18
Ussat^^^20:18
xrandrDammitJim:  :)20:18
Odd_BlokeBecause then you just load-balance the service in the usual way you would load-balance an HTTP service.20:18
Ussatyup20:18
DammitJimxrandr, and you have gluster clients that mount those resources?20:18
Ussatwe have VERY different needs20:18
xrandrDammitJim: yes.20:18
DammitJimUssat, I appreciate you sharing what YOU do20:19
UssatI work at a hospital where shit has to always be avaliable20:19
xrandrDammitJim: I use it for my new business. I now have 8.1 TB between 3 servers20:19
Ussatours is a multi million dollar setup20:19
DammitJimmulti million in infrastructure?20:19
DammitJimyou sound like Orlando Health20:19
DammitJimxrandr, how much storage do you have on each server?20:20
UssatNo, not in Orlando. We are a major research hospital/University20:20
DammitJimbtw, isn't it weird how many layers we have to take into account for "reliability?"20:20
xrandrDammitJim: If you're going to use gluster, you need to determine the setup you need. Do you want file replication (mirrored drives), or one huge drive?20:20
DammitJimoh yeah, US SAT20:20
DammitJimreplication in my case20:20
xrandrDammitJim: I have two servers with 1.4TB each, and another server with a 6T drive dedicated to gluster20:20
Ussatok, replication is pretty simple20:21
DammitJimso, glusterfs is pretty good? I can deal with the "slowness"20:21
Ussatit can be tuned20:21
xrandrDammitJim: there's ways to speed that up. Network compression, etc.20:21
Ussatyup20:21
xrandrDammitJim: also, depending on your needs, I would recommend using the ssl option with it. If your data is sensitive, let it be encrypted20:22
Ussatwe also have to deal with encryption in flight and at rest20:22
DammitJimyup, I'll need to do encryption20:23
xrandrUssat: doesn't the SSL option for the volume handle that? or is that just transmission between the servers?20:23
Ussatxrandr, it might, but we use special encryption accelerator cards20:24
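(For reference, gluster's TLS support is toggled per volume once certificates are set up on each node; the volume name below is hypothetical:)

    # enable TLS on the client<->brick and server-side connections
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on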
xrandrDammitJim: I'm gonna go out on a limb here and say you know all your servers' specs, right?20:24
DammitJimxrandr, so, I should probably just test this in a lab, then20:24
DammitJimno, I can't say I know the server specs20:24
xrandrDammitJim: absolutely!20:25
Ussatxrandr, we are talking almost a petabyte of data :)20:25
xrandrUssat: Which filesystem do you use at the server level? ext4 or xfs?20:25
DammitJimand the cluster only needs to be something like 1TB20:25
DammitJimand we'll probably only use about 1/320:25
xrandrDammitJim: I didn't have a lab to test it on, so I just went live with it and worked it as I went20:26
UssatThe FS is on this:  https://www.ibm.com/us-en/marketplace/flash-storage-virtualization20:26
DammitJimbut we know we are growing and that should be able to keep us working for about 2 years20:26
Ussatand its XFS20:26
DammitJimthat looks pretty20:26
DammitJimthat's nice that the 1st thing they say is "Save money"20:27
Ussatwe have one in each data center, maxxed out20:27
Ussatits sales, of course they do :)20:27
DammitJim'cause you are saving money, but it's more like buying insurance20:27
xrandrDammitJim: There's #gluster if ya need anything :)20:27
DammitJimthanks xrandr20:27
xrandrand if I'm around I'd be happy to answer any questions20:27
DammitJimagain, xrandr you have to install gluster clients on the other servers that want to mount the file system, right?20:27
DammitJimthanks20:27
Ussatglusterfs will do what you want, the Uni proper uses that20:28
xrandrDammitJim: no, you need to install gluster server on every server you want to add to the gluster volume20:28
xrandrsorry i misread that20:28
xrandrGluster client on any machine that wants to mount the gluster volume.   Gluster server for any server that wants to contribute to the volume20:29
DammitJimyup, got it20:29
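(A rough sketch of the replicated setup being described, with hypothetical hostnames and brick paths; a 2-way replica is the minimal case:)

    # on server1: add the peer, create a 2-way replicated volume, start it
    gluster peer probe server2
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol

    # on any client (glusterfs-client installed):
    mount -t glusterfs server1:/myvol /mnt/myvol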
DammitJimdo the gluster servers have to be the same for replication?20:29
Ussatsame, meaning ?20:29
DammitJimhow do you address the cluster? floating IP?20:29
xrandrYou'll have one master server, and a bunch of slaves20:29
DammitJimsame like... same storage, same RAM, same CPUs20:29
xrandrthe one server is the initial volume20:30
xrandrthen you add another to it from the first server, and so on20:30
UssatDammitJim, well, they should of course20:30
xrandrDammitJim: They should, but it's not necessary20:30
DammitJimoh yeah 'cause xrandr is not doing replication (6TB + 1T + 1T)20:30
xrandrbut you might hit some performance issues20:30
xrandrDammitJim: my biggest suggestion is to make sure they are all of the same network speed20:31
xrandrDon't have 100Mb cards in some, and 1Gb cards in others20:31
DammitJimso, like 1GB and stuff20:31
xrandrfor some reason, gluster gets unhappy with that.20:31
DammitJimgot it20:31
xrandrOr at least it did with me20:32
xrandrDammitJim: what version of Ubuntu server are you running?20:32
xrandrUssat: how big is your Gluster volume?20:34
* xrandr is curious20:34
DammitJimright now? please don't ask20:35
DammitJimI have a ton of 14.0420:35
DammitJimbut I would probably set this up with 18.0420:35
xrandrGood choice.  Do a little research... the packages that are bundled with ubuntu-server for gluster are apparently EOL. There's repos out there with the 4.x releases, which are still in their production phase and are supported20:37
DammitJimoh really?20:37
DammitJimman, one can't win, huh? everything is EOL these days but that's probably because we haven't been able to keep up with our processes20:38
xrandrDammitJim: yeah. I also have a bone to pick with the folks who wrote the gluster documentation.  When they stated it should be backwards compatible, they really needed to specify between which versions20:38
xrandri had a 3.5 and a 3.12 i think20:38
xrandrThey did not like each other20:39
DammitJimoh yeah, one would think that one can upgrade 1 node and then the other20:39
DammitJimI"m sure there are some limitations20:39
xrandrOh there are. You can only have a gap of about 3 or 4 versions20:39
xrandrDammitJim: and you can upgrade one node then the other. Just make sure you keep up with it.20:40
xrandrso that the versions don't fall too far apart where you can't do that20:40
DammitJimI'm actually in a pickle with rabbitmq and erlang because I let it go too long20:40
xrandrDammitJim: you also need to figure out which communication protocol you want to use between the servers (bricks).   There's TCP, UDP, and RDMA.  I use TCP20:41
xrandrReally read through the gluster docs and figure out how you want to set things up20:43
xrandrsome things are changeable, some things are not20:44
DammitJimdo you have the servers directly connected or going through a switch?20:45
xrandrDammitJim: 2 are in the same datacenter, 1 is in another20:45
xrandrso 2 connected directly to each other via crossover cable, and another is on a 10Gb connection20:45
DammitJimI think in my case actually, they are going to be in the same Cisco UCS20:45
xrandrUCS?20:46
DammitJimCisco Unified computing20:47
xrandrIs that Cisco20:47
DammitJimsi20:47
xrandrversion of Amazon's cloud computing?20:47
DammitJimno20:47
xrandr( hit enter instead of '  )  lol20:48
DammitJimthese are blade servers basically20:48
jellyUCS is cisco's "hyperconverged" hardware server platform20:48
xrandrjelly: hmm. I've not heard of that before... gonna have to do some researching20:48
jellyI think it's existed for like 10 years now, they're up to what, 3rd-4th generation?20:49
DammitJim4th20:49
DammitJimmaybe 5th (my system is 2 years old)20:49
DammitJimSANs backend and VMware stuff20:50
xrandrjelly: i am clearly out of the loop on that stuff :)   A guy I used to work with several years ago left a bad taste in my mouth as far as Cisco goes20:50
DammitJimCisco is a royal pain in the bottom20:50
xrandrDammitJim: your restraint is admirable20:50
DammitJima company that I can't say enough of... is Nimble Storage, even though now they are HP20:50
jellythese seem to be just x86 brand name blades with some interesting features20:50
jellyHPE bought Nimble?20:51
DammitJimyeah... beginning of the year...20:52
UssatUCS's are fucking GREAT21:00
DammitJimyou think so Ussat ?21:00
Ussatwe have a few 4th gen21:00
Ussatoh yea21:00
Ussatall my VMs are on them21:01
DammitJimdo you have the luxury of running  a vcenter with dozens of ESXi hosts?21:01
UssatWell.....actually. I just run the *nix side of things, but yes21:03
UssatDammitJim, again, what we do is a pretty sizeable enterprise21:03
DammitJimcool21:03
DammitJimman, what a pain this erlang stuff is21:04
UssatDammitJim, we have a little over a petabyte in storage right now21:04
Ussatand we need to keep stuff ...forever almost it seems21:04
DammitJimforever is like a looong time21:05
Ussatyes21:05
Ussatmedical records need to be kept a long time21:05
DammitJimhey guys, how do I perform an upgrade of erlang from version 20.1-1 to just 20.3-1 even though the candidate is 21.0-121:05
nacc_DammitJim: is 20.3-1 available via `apt-cache policy` ?21:06
DammitJimyes21:07
nacc_and none of those numbers match ubuntu package versions21:07
nacc_DammitJim: pastebin it? but you can do `sudo apt-get install erlang=<version>`21:07
DammitJimthat's it, nacc_ !!!21:08
DammitJimthanks!21:08
DammitJimoh gosh, I didn't realize there are a bunch of erlang packages like erlang-ic, erlang-gs that I don't want to automatically upgrade to 21, but want to keep it at 20.321:09
DammitJimor how do I tell ubuntu that when I do an: sudo apt-get dist-upgrade21:11
DammitJimI don't want erlang packages to be upgraded to 21.0, but to 20.321:11
DammitJimsomething opened the floodgates to recommend that candidate which in my case is not compatible with rabbitmq21:12
DammitJimugh, gotta run21:12
DammitJimsee you guys21:12
nacc_!pinning | DammitJim21:12
ubottuDammitJim: pinning is an advanced feature that APT can use to prefer particular packages over others. See https://help.ubuntu.com/community/PinningHowto21:12
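(A minimal pin for this case might look like the sketch below. The version pattern is assumed from the conversation; check `apt-cache policy erlang` for the exact strings, including any epoch prefix such as "1:".)

    # /etc/apt/preferences.d/erlang
    Package: erlang*
    Pin: version 20.3*
    Pin-Priority: 1001

(A priority above 1000 holds the packages at 20.3 even when a higher candidate like 21.0 is available, and permits downgrades to match the pin.)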
Epx99818.10 beta is out right?21:24
powersjEpx998, beta is not out for Cosmic https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule21:26
Epx998http://cdimage.ubuntu.com/daily-live/current/ isn't a beta?21:29
tomreynas the url indicates, it's a daily build21:33
tomreyni.e. potentially broken, unstable, pre-release21:34
tomreyn(the download web page also says "daily build")21:35
geniiBeta freeze isn't until Sept 27th anyhow for 18.1021:37
genii( according to https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule )21:38
Epx998yeah that's ok, some dev wants to test some gpu stuff on it21:59
Epx998thanks for the info21:59
