[00:30] <smoser> Jimmy06: did you try running again ?
[00:30] <smoser> there's a good shot it will work after a reboot.
[02:37] <ansibleesxi666> Hello all, i need to build a linux server for build purposes. i have some issues with the Ubuntu 16 "server" ISO - boot with UEFI on an HP DL360 Gen 10 box with RAID 0 ssd disks - the Ubuntu 16 "desktop" cd works well, so does it matter much if i use server or desktop?
[02:42] <tomreyn> ansibleesxi666: you can install desktop and convert it to a server installation, but i would recommend installing the server installation instead (since converting isn't that easy). by the way, you are probably referring to ubuntu "16.04 LTS", there is no "ubuntu 16"
[02:43] <tomreyn> ansibleesxi666: what is the error message you run into when you try to install using the server installer?
[02:43] <leftyfb> tomreyn: all of this has been discussed in #ubuntu where they asked this same exact question
[02:43] <tomreyn> oh, maybe ansibleesxi666 should refer to those answers then
[02:45] <ansibleesxi666> yes 16.04.5 is what i am using , i see a strange issue, i have 3 SSD disks on the box, each with 1 TB, we want to use raid 0 for perf, when i boot the box with the server ISO the OS installer shows the disk at 600 GB, when i try the desktop iso it shows the correct size of 3 TB
[02:46] <ansibleesxi666> disk as 600 GB i mean
[02:46] <ansibleesxi666> as i was in rush i used desktop ISO & build them for now
[02:48] <ansibleesxi666> i did google as it says both desktop & server use the same Kernel so i went ahead with Desktop
[02:49] <tomreyn> ansibleesxi666: so did you ask these same questions in #ubuntu before?
[02:51] <ansibleesxi666> i did, i did not get an answer so i am here
[02:52] <tomreyn> leftyfb seems to suggest it was discussed there before, maybe you missed some responses (left early)?
[02:52] <tomreyn> !irclogs
[02:55] <tomreyn> if you really got no replies (please point me to when it was previously discussed) i'll be happy to go over it with you again.
[03:13] <ansibleesxi666> let me check
[03:13] <ansibleesxi666> No i did not get the response i was looking for
[03:14] <ansibleesxi666> the last 2 conversations were:
[03:14] <ansibleesxi666> (7:41:48 PM) leftyfb: ansibleesxi666: when one of those drives goes bad, you will lose all your data
[03:14] <ansibleesxi666> (7:42:33 PM) dxb left the room (quit: Ping timeout: 245 seconds).
[03:14] <ansibleesxi666> (7:43:26 PM) ansibleesxi666: our build team wants less time & they do not care about the data on the build box as the actual build goes in a central git repo.... these build nodes are purely for compute ... but the strange issue is why the server iso shows the disk size as 600 GB & not 3 TB
[03:14] <ansibleesxi666> brb
[03:30] <ansibleesxi666> i did more googling & i think i have a work-around, ie in my case a desktop or server iso will not matter much as the core kernel is the same
[03:31] <ansibleesxi666> thanks for your time
[05:05] <cpaelzer> good morning
[06:03] <lordievader> Good morning
[07:38] <Jimmy06> smoser: I tried more than 10 times with the same config
[09:58] <muhaha> Has anyone experience with on-prem landscape ?
[11:17] <waveform> muhaha, what's up?
[11:17] <muhaha> @waveform did you try to dockerize this big thing ?
[11:18] <waveform> muhaha, there's juju charms for it but I don't think there's any (official) docker images
[11:18] <blackflow> isn't containerizing it.... contrary to its purpose?
[11:19] <muhaha> Why?
[11:19] <waveform> not necessarily - containers don't *have* to be many to a machine
[11:20] <waveform> for instance, the juju charms are pretty flexible - they typically set up one machine for haproxy, another for pg, etc. etc. - now those might be "real" machines, or they could just be containers
[11:20] <muhaha> I will try to dockerize it, but I will have to understand this landscape thingy..
[11:21] <muhaha> I dont understand how to start this.. For example the quickstart package is useless for this. I will need to use landscape-server
[11:22] <waveform> you're probably better off looking through what the quickstart script actually does - but trying it out on juju will give you a better idea of what a production setup really looks like
[11:22] <waveform> (we don't recommend quickstart for scalable production deployments)
[11:23] <blackflow> muhaha: in that it has to control the entire machine and dockerizing it is isolating it? or am I misunderstanding something here
[11:28] <muhaha> I dont need to control host machine
[11:29] <muhaha> I need to control other ones...
[11:29] <muhaha> That is why I need some gateway... It does not matter if it's running on bare-metal or in a container
[11:30] <blackflow> oh, I wasn't aware you could install the server on premises. I thought it was strictly SaaS.
[11:34] <muhaha> unfortunately there is no alternative for ubuntu (opensource) to manage other servers :(  So I will have to use landscape. Foreman can not handle this afaik
[11:35] <muhaha> *selfhosted
[11:41] <blackflow> muhaha: there's always salt stack
[11:47] <muhaha> of course there are also ansible and chef
[11:49] <blackflow> ansible is way too slow. good for simple setups, but as you scale up in config complexity it becomes painfully slow.
[11:50] <blackflow> but the point being, if you want analogous to landscape then saltstack and chef (and puppet) is more appropriate because of the client-server model and ability for clients to trigger events between them.
[14:33] <rharper> Jimmy06: are you running the 18.04.1 release of the installer?
[17:12] <tekgeek1205> So I have a 10G fiber connection that I'm trying to use for the physical connection to an OpenvSwitch. I need the host to have access to the same interface to serve as a NAS on that port. I'm having problems with assigning a static ip address. DHCP works fine but isn't an option, the DHCP server will be a pfSense KVM. I've tried both ports in DHCP and they work fine. I tried both with
[17:12] <tekgeek1205> static configs and I can't get DNS. I can still ping 8.8.8.8 but not google.com. I'm sure it's just a simple mistake caused by my lack of knowledge. Here is my interfaces file: https://pastebin.com/5v9YvE1p
[17:24] <nacc_> tekgeek1205: what version of ubuntu?
[17:24] <tekgeek1205> 18.04
[17:24] <tekgeek1205> server, i reverted from netplan back to ifupdown
[17:28] <nacc_> tekgeek1205: you can talk to `systemd-resolve` to see how it's resolving hostnames?
[17:28] <nacc_> tekgeek1205: what is in your /etc/resolv.conf?
[17:29] <tekgeek1205> checking
[17:30] <tekgeek1205> nameserver 127.0.0.53....... so it's not getting a DNS server?
[17:30] <nacc_> tekgeek1205: that's systemd-resolve
[17:30] <nacc_> *systemd-resolved
[17:30] <nacc_> tekgeek1205: so you need to ask `systemd-resolve --status` what it is using
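The inspection step nacc_ describes, sketched for an 18.04-era system (the sample output line is illustrative, not from tekgeek1205's box):

```shell
# Ask systemd-resolved what it is actually using, globally and per link.
systemd-resolve --status
# On later releases the same information comes from `resolvectl status`.
# A healthy per-link section includes a line such as:
#   DNS Servers: 192.168.1.1
```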
[17:31] <mason> Ah, I was curious how to get systemd-resolved to spit that out.
[17:31] <tekgeek1205> do you want that in a pastebin?
[17:31] <nacc_> tekgeek1205: yeah, that's probably useful
[17:31] <tekgeek1205> the first paste was the resolv.conf
[17:32] <nacc_> tekgeek1205: or you can just read it, to see what for that iface is listed as 'DNS Servers'
[17:32] <mason> Not sure why they don't populate and maintain a text comment in /etc/resolv.conf since that's where people are going to look. Or even just a comment in that file telling people how to dig out the relevant status.
[17:32] <nacc_> mason: i mostly agree with you :)
[17:32] <mason> nacc_: I skipped the obvious first pick, "don't do that".
[17:33] <tekgeek1205> https://pastebin.com/1QUQReM7
[17:34] <teward> tekgeek1205: is that *all* you are getting?
[17:34] <teward> there's usually other lines than just that
[17:34] <tekgeek1205> yeah with a fresh boot and a static address....
[17:34] <teward> > static address
[17:34] <teward> did you set DNS record data in your netplan config?
[17:35] <teward> and if so, what is it?
[17:35] <tekgeek1205> no, I'm using ifupdown; netplan and openvswitch are incompatible
[17:35] <teward> then did you set dns-nameservers in ifupdown?
[17:36] <tekgeek1205> yeah....https://pastebin.com/5v9YvE1p
[17:36] <tekgeek1205> that's my interfaces file
[17:36] <mason> dns-nameservers in ifupdown don't negate systemd-resolved jumping in
[17:36] <mason> My hope is that it uses the interfaces information, but I'm not sure
[17:37] <teward> it might not be doing that properly, that's a systemd headache though.
[17:37] <teward> you can force your system to use the other DNS resolvers, but you'd have to fuss around with some ResolveD config files to do it
[17:37] <mason> For my part, I found that purging resolvconf helped.
[17:38] <mason> I haven't tested all permutations.
[17:40] <nacc_> i don't have a 16.04 in front of me, but systemd-resolve --status, should be reporting a per-link entry, i thought
[17:42] <cyphermox> tekgeek1205: mason: teward: "dns-nameserver" only works if you have resolvconf installed
[17:42] <cyphermox> we also don't install that by default, because resolvconf and systemd-resolved both want to be authoritative for nameserver info
[17:43] <teward> cyphermox: so if you're using ifupdown with systemd-resolved how do you pass it the DNS servers to query via ifupdown configs?
[17:43] <teward> or would that be a manual step called by `up` in the config?
[17:43] <nacc_> cyphermox: thanks for that info!
[17:43] <cyphermox> teward: nothing you write in ifupdown is passed to systemd-resolved.
[17:43] <cyphermox> or anywhere in systemd for that matter
[17:44] <cyphermox> if you use ifupdown, you need to write resolv.conf yourself, or add your DNS in /etc/systemd/resolved.conf (the DNS= line)
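cyphermox's second option as a sketch (the resolver addresses are placeholders, and this overwrites resolved.conf, so merge by hand if you already have other settings there):

```shell
# Put upstream DNS servers directly into systemd-resolved's config,
# then restart the service so the DNS= line takes effect.
sudo tee /etc/systemd/resolved.conf >/dev/null <<'EOF'
[Resolve]
DNS=192.168.1.1 8.8.8.8
EOF
sudo systemctl restart systemd-resolved
systemd-resolve --status    # the servers should now show under "Global"
```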
[17:48] <teward> tekgeek1205: ^
[17:48] <teward> cyphermox: is this documented anywhere?
[17:48] <mason> cyphermox: Ah, I must be thinking of my "funny upgrade" I did last week then.
[17:49] <tekgeek1205> waiting on a reboot then, i'll try changing /etc/systemd/resolved.conf
[17:56] <tekgeek1205> Thank you guys!!! DNS is working! This is my first big project with linux. I was about to give up and fall back to linux bridges until i could get a 2nd 10gb uplink card for my switches. I'm still a bit green in the linux world.
[17:58] <mason> tekgeek1205: \o/
[18:05] <tekgeek1205> now I can go on my merry way setting up containers and vm's. 10gb from my workstation to my server has been a dream for years!!!! Time to put that ZFS array to work!
[18:10] <compdoc> zfs?! oh no!
[18:11] <odc> no?
[18:11] <compdoc> jk :)
[18:11] <odc> ah :)
[18:20] <tekgeek1205> it's also the root FS......that was fun
[19:41] <DammitJim> do you guys have any recommendations of what is normally used as a "file server"
[19:42] <DammitJim> like when one of your users logs on to a server via ssh
[19:42] <DammitJim> the current directory where they land is actually a mount to a different server
[19:42] <DammitJim> is that normally done with just a samba server? any other more popular options?
[19:44] <jelly> from unix to unix I'd just use sftp
[19:44] <jelly> and let them connect to the system hosting actual files, no network filesystem use
[19:45] <DammitJim> oh, I meant like if I ssh into serverA which is an application server
[19:45] <DammitJim> I am taken to a directory that is actually part of a mount to serverB
[19:45] <jelly> oh, do you want users to have a $HOME on a shared file server
[19:45] <DammitJim> +1
[19:46] <DammitJim> I've seen that done in the past
[19:46] <DammitJim> and I'm curious as to how that is normally done (especially if the backend file server is redundant)
[19:47] <jelly> we don't do that at all.  NFS v4 supposedly has all sorts of nifty features for that, including clustered nfs, but i have no idea which features actually work well and are reliable
[19:47] <DammitJim> thanks jelly
[19:48] <DammitJim> so, related to this... are there any recommended file server clusters?
[19:48] <DammitJim> in our company, many of the M2M processes just move files around
[19:48] <DammitJim> I'm looking to find a way to store those files on some kind of redundant system in case that I have to do maintenance or upgrades to that resource
[19:53] <Ussat> So....we DO do that with NFS v4
[19:53] <Ussat> its not trivial to set up
[19:54] <Ussat> so what you want is a clustered FS, which, is also not trivial
[19:54] <Ussat> there are a few ways to do it
[19:55] <Ussat> and none of them are simple
[19:55] <tomreyn> there are also those proprietary storage clusters which can export r/w via nfs
[19:55] <Ussat> ^^
[19:56] <jelly> I bet netapp is simple!
[19:56] <Ussat> yup...those are the simplest, although more expensive
[19:56] <Ussat> ...
[19:57] <jelly> glusterfs? ceph? do any of those work without being horribly slow?
[19:57] <Ussat> they are not made for speed
[19:57] <Ussat> they are made (in theory) for resilience
[20:01] <tomreyn> i never used either, but would expect them to work out in this use case, since they are also used for storage backends in clouds
[20:02] <tomreyn> (so there must be ways to configure them to not be super slow)
[20:02] <jelly> don't storage backends in clouds just expose objects that are then used as blockdevs
[20:09] <Ussat> They can both be used in this case, but neither are trivial
[20:11] <DammitJim> I've heard of netapp
[20:11] <DammitJim> Ussat, what would you say is the advantage of having a file system cluster?
[20:11] <Ussat> yes...netapp is a thing
[20:12] <DammitJim> yeah, I read about glusterfs and was going to try it out in a virtual lab
[20:12] <Ussat> OK, first you need to differentiate between a cluster enabled FS and a clustered FS
[20:12] <Ussat> what do you want to do
[20:13] <Ussat> the reason I ask, is I work with systems that need to have a VERY high availability
[20:13] <DammitJim> so, I'm looking at this from the perspective of: normally a server is the file server
[20:13] <DammitJim> well, if I need to do maintenance on that server or it breaks, then all the applications will bork
[20:13] <Ussat> so you need a cluster
[20:13] <DammitJim> so, I thought... a cluster of servers would take care of that problem
[20:13] <Ussat> correct, it can
[20:14] <DammitJim> so, if serverA goes down, serverB will continue to service whatever the apps need
[20:14] <Ussat> but that is different than a load spread FS
[20:14] <Ussat> cluster
[20:14] <DammitJim> what is the difference between enabled and the other option?
[20:14] <Ussat> you want active <<----->> active
[20:15] <Ussat> and they will share a FS, so when one dies, it releases the lock on the FS and the other picks it up
[20:15] <DammitJim> oh ok, so this is not like serverA and serverB are constantly synchronizing data between them?
[20:15] <Ussat> Well, you can have that, but its different
[20:15] <DammitJim> who hosts the FS?
[20:15] <DammitJim> 'cause then what happens if the FS server goes down?
[20:16] <DammitJim> btw, I know some of my thinking sometimes will never happen
[20:16] <Ussat> Generally the FS is hosted on both
[20:16] <Ussat> and shared
[20:16] <DammitJim> so, just yell at me if I'm thinking the wrong way
[20:17] <Odd_Bloke> What's driving the need for the sharing to happen at the filesystem level?
[20:17] <xrandr> DammitJim: make sure you rebalance the filesystem often
[20:17] <DammitJim> file processing and hosting
[20:17] <xrandr> the glusterfs
[20:17] <DammitJim> rebalance? oh gosh
[20:17] <Ussat> so there are a few ways....what WE do, is we have a HUGE isilon that is the FS, which is replicated between datacenters
[20:17] <xrandr> DammitJim: it does it for you, there's a command you can use. gluster volume rebalance <VOL> start
[20:18] <xrandr> DammitJim: I am very fond of gluster :(
[20:18] <xrandr> :) *
[20:18] <Odd_Bloke> Your needs might be better met by using an object store for the files, where you have an API that you use to push and pull files.
[20:18] <DammitJim> :) or :( ?
[20:18] <Ussat> ^^^
[20:18] <xrandr> DammitJim:  :)
[20:18] <Odd_Bloke> Because then you just load-balance the service in the usual way you would load-balance an HTTP service.
[20:18] <Ussat> yup
[20:18] <DammitJim> xrandr, and you have gluster clients that mount those resources?
[20:18] <Ussat> we have VERY different needs
[20:18] <xrandr> DammitJim: yes.
[20:19] <DammitJim> Ussat, I appreciate you sharing what YOU do
[20:19] <Ussat> I work at a hospital where shit has to always be available
[20:19] <xrandr> DammitJim: I use it for my new business. I now have 8.1 TB between 3 servers
[20:19] <Ussat> ours is a multi million dollar setup
[20:19] <DammitJim> multi million in infrastructure?
[20:19] <DammitJim> you sound like Orlando Health
[20:20] <DammitJim> xrandr, how much storage do you have on each server?
[20:20] <Ussat> No, not in Orlando. We are a major research hospital/University
[20:20] <DammitJim> btw, isn't it weird how many layers we have to take into account for "reliability?"
[20:20] <xrandr> DammitJim: If you're going to use gluster, you need to determine the setup you need. Do you want file replication (mirrored drives), or one huge drive?
[20:20] <DammitJim> oh yeah, US SAT
[20:20] <DammitJim> replication in my case
[20:20] <xrandr> DammitJim: I have two servers with 1.4TB each, and another server with a 6T drive dedicated to gluster
[20:21] <Ussat> ok, replication is pretty simple
[20:21] <DammitJim> so, glusterfs is pretty good? I can deal with the "slowness"
[20:21] <Ussat> it can be tuned
[20:21] <xrandr> DammitJim: there's ways to speed that up. Network compression, etc.
[20:21] <Ussat> yup
[20:22] <xrandr> DammitJim: also, depending on your needs, I would recommend using the ssl option with it. If your data is sensitive, let it be encrypted
[20:22] <Ussat> we also have to deal with encryption in flight and at rest
[20:23] <DammitJim> yup, I'll need to do encryption
[20:23] <xrandr> Ussat: doesn't the SSL option for the volume handle that? or is that just transmission between the servers?
[20:24] <Ussat> xrandr, it might, but we use special encryption accelerator cards
[20:24] <xrandr> DammitJim: I'm gonna go out on a limb here and say you know all your server's specs, right?
[20:24] <DammitJim> xrandr, so, I should probably just test this in a lab, then
[20:24] <DammitJim> no, I can't say I know the server specs
[20:25] <xrandr> DammitJim: absolutely!
[20:25] <Ussat> xrandr, we are talking almost a petabyte of data :)
[20:25] <xrandr> Ussat: Which filesystem do you use at the server level? ext4 or xfs?
[20:25] <DammitJim> and the cluster only needs to be something like 1TB
[20:25] <DammitJim> and we'll probably only use about 1/3
[20:26] <xrandr> DammitJim: I didn't have a lab to test it on, so I just went live with it and worked it as I went
[20:26] <Ussat> The FS is on this:  https://www.ibm.com/us-en/marketplace/flash-storage-virtualization
[20:26] <DammitJim> but we know we are growing and that should be able to keep us working for about 2 years
[20:26] <Ussat> and its XFS
[20:26] <DammitJim> that looks pretty
[20:27] <DammitJim> that's nice that the 1st thing they say is "Save money"
[20:27] <Ussat> we have one in each data center, maxxed out
[20:27] <Ussat> its sales, of course they do :)
[20:27] <DammitJim> 'cause you are saving money, but it's more like buying insurance
[20:27] <xrandr> DammitJim: There's #gluster if ya need anything :)
[20:27] <DammitJim> thanks xrandr
[20:27] <xrandr> and if I'm around I'd be happy to answer any questions
[20:27] <DammitJim> again, xrandr you have to install gluster clients on the other servers that want to mount the file system, right?
[20:27] <DammitJim> thanks
[20:28] <Ussat> glusterfs will do what you want, the Uni proper uses that
[20:28] <xrandr> DammitJim: no, you need to install gluster server on every server you want to add to the gluster volume
[20:28] <xrandr> sorry i misread that
[20:29] <xrandr> Gluster client on any machine that wants to mount the gluster volume.   Gluster server for any server that wants to contribute to the volume
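xrandr's server/client split, sketched as a hypothetical two-node replicated volume (the hostnames node1/node2, the brick path, and the volume name are all placeholders, and glusterd must already be running on both nodes):

```shell
# On node1: form the trusted pool and create a 2-way replicated volume.
sudo gluster peer probe node2
sudo gluster volume create shared replica 2 transport tcp \
    node1:/bricks/shared node2:/bricks/shared
sudo gluster volume start shared

# On any machine with the gluster client installed: mount the volume.
sudo mount -t glusterfs node1:/shared /mnt/shared
```

Note that gluster warns about plain replica-2 volumes (split-brain risk) and suggests replica 3 or an arbiter brick instead.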
[20:29] <DammitJim> yup, got it
[20:29] <DammitJim> do the gluster servers have to be the same for replication?
[20:29] <Ussat> same, meaning ?
[20:29] <DammitJim> how do you address the cluster? floating IP?
[20:29] <xrandr> You'll have one master server, and a bunch of slaves
[20:29] <DammitJim> same like... same storage, same RAM, same CPUs
[20:30] <xrandr> the one server is the initial volume
[20:30] <xrandr> then you add another to it from the first server, and so on
[20:30] <Ussat> DammitJim, well, they should of course
[20:30] <xrandr> DammitJim: They should, but it's not necessary
[20:30] <DammitJim> oh yeah 'cause xrandr is not doing replication (6TB + 1T + 1T)
[20:30] <xrandr> but you might hit some performance issues
[20:31] <xrandr> DammitJim: my biggest suggestion is to make sure they are all of the same network speed
[20:31] <xrandr> Don't have 100Mb cards in some, and 1Gb cards in others
[20:31] <DammitJim> so, like 1GB and stuff
[20:31] <xrandr> for some reason, gluster gets unhappy with that.
[20:31] <DammitJim> got it
[20:32] <xrandr> Or at least it did with me
[20:32] <xrandr> DammitJim: what version of Ubuntu server are you running?
[20:34] <xrandr> Ussat: how big is your Gluster volume?
[20:34]  * xrandr is curious
[20:35] <DammitJim> right now? please don't ask
[20:35] <DammitJim> I have a ton of 14.04
[20:35] <DammitJim> but I would probably set this up with 18.04
[20:37] <xrandr> Good choice.  Do a little research... the packages that are bundled with ubuntu-server for gluster are apparently EOL. There's repos out there with the 4.X series, which is still in its production phase and supported
[20:37] <DammitJim> oh really?
[20:38] <DammitJim> man, one can't win, huh? everything is EOL these days but that's probably because we haven't been able to keep up with our processes
[20:38] <xrandr> DammitJim: yeah. I also have a bone to pick with the folks who wrote the gluster documentation.  When they stated it should be backwards compatible, they really needed to specify between which versions
[20:38] <xrandr> i had a 3.5 and a 3.12 i think
[20:39] <xrandr> They did not like each other
[20:39] <DammitJim> oh yeah, one would think that one can upgrade 1 node and then the other
[20:39] <DammitJim> I"m sure there are some limitations
[20:39] <xrandr> Oh there are. You can only have a gap of about 3 or 4 versions
[20:40] <xrandr> DammitJim: and you can upgrade one node then the other. Just make sure you keep up with it.
[20:40] <xrandr> so that the versions don't fall too far apart where you can't do that
[20:40] <DammitJim> I'm actually in a pickle with rabbitmq and erlang because I let it go too long
[20:41] <xrandr> DammitJim: you also need to figure out which communication protocol you want to use between the servers (bricks).   There's TCP, UDP, and RDMA.  I use TCP
[20:43] <xrandr> Really read through the gluster docs and figure out how you want to set things up
[20:44] <xrandr> some things are changeable, some things are not
[20:45] <DammitJim> do you have the servers directly connected or going through a switch?
[20:45] <xrandr> DammitJim: 2 are in the same datacenter, 1 is in another
[20:45] <xrandr> so 2 connected directly to each other via crossover cable, and another is on a 10GB connection
[20:45] <DammitJim> I think in my case actually, they are going to be in the same Cisco UCS
[20:46] <xrandr> UCS?
[20:47] <DammitJim> Cisco Unified computing
[20:47] <xrandr> Is that Cisco
[20:47] <DammitJim> si
[20:47] <xrandr> version of Amazons cloud computing?
[20:47] <DammitJim> no
[20:48] <xrandr> ( hit enter instead of '  )  lol
[20:48] <DammitJim> these are blade servers basically
[20:48] <jelly> UCS is cisco's "hyperconverged" hardware server platform
[20:48] <xrandr> jelly: hmm. I've not heard of that before... gonna have to do some researching
[20:49] <jelly> I think it's existed for like 10 years now, they're up to what, 3rd-4th generation?
[20:49] <DammitJim> 4th
[20:49] <DammitJim> maybe 5th (my system is 2 years old)
[20:50] <DammitJim> SANs backend and VMware stuff
[20:50] <xrandr> jelly: i am clearly out of the loop on that stuff :)   A guy I used to work with several years ago put a bad taste in my mouth as far as Cisco went
[20:50] <DammitJim> Cisco is a royal pain in the bottom
[20:50] <xrandr> DammitJim: your restraint is admirable
[20:50] <DammitJim> a company that I can't say enough of... is Nimble Storage, even though now they are HP
[20:50] <jelly> these seem to be just x86 brand name blades with some interesting features
[20:51] <jelly> HPE bought Nimble?
[20:52] <DammitJim> yeah... beginning of the year...
[21:00] <Ussat> UCS's are fucking GREAT
[21:00] <DammitJim> you think so Ussat ?
[21:00] <Ussat> we have a few 4th gen
[21:00] <Ussat> oh yea
[21:01] <Ussat> all my VMs are on them
[21:01] <DammitJim> do you have the luxury of running  a vcenter with dozens of ESXi hosts?
[21:03] <Ussat> Well.....actually. I just run the *nix side of things, but yes
[21:03] <Ussat> DammitJim, again, what we do is a pretty sizeable enterprise
[21:03] <DammitJim> cool
[21:04] <DammitJim> man, what a pain this erlang stuff is
[21:04] <Ussat> DammitJim, we have a little over a petabyte in storage right now
[21:04] <Ussat> and we need to keep stuff ...forever almost it seems
[21:05] <DammitJim> forever is like a looong time
[21:05] <Ussat> yes
[21:05] <Ussat> medical records need to be kept a long time
[21:05] <DammitJim> hey guys, how do I perform an upgrade of erlang from version 20.1-1 to just 20.3-1 even though the candidate is 21.0-1
[21:06] <nacc_> DammitJim: is 20.3-1 available via `apt-cache policy` ?
[21:07] <DammitJim> yes
[21:07] <nacc_> and none of those numbers match ubuntu package versions
[21:07] <nacc_> DammitJim: pastebin it? but you can do `sudo apt-get install erlang=<version>`
[21:08] <DammitJim> that's it, nacc_ !!!
[21:08] <DammitJim> thanks!
[21:09] <DammitJim> oh gosh, I didn't realize there are a bunch of erlang packages like erlang-ic, erlang-gs that I don't want to automatically upgrade to 21, but want to keep it at 20.3
[21:11] <DammitJim> or how do I tell ubuntu that when I do an: sudo apt-get dist-upgrade
[21:11] <DammitJim> I don't want erlang packages to be upgraded to 21.0, but to 20.3
[21:12] <DammitJim> something opened the floodgates to recommend that candidate which in my case is not compatible with rabbitmq
[21:12] <DammitJim> ugh, gotta run
[21:12] <DammitJim> see you guys
[21:12] <nacc_> !pinning | DammitJim
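The !pinning factoid points at apt preferences; for DammitJim's case a pin file might look like this (the `erlang*` glob and the version pattern are illustrative, check `apt-cache policy erlang` for the real version strings):

```shell
# Pin every erlang* package to the 20.3 series so dist-upgrade
# stops offering 21.0 as the upgrade candidate.
sudo tee /etc/apt/preferences.d/erlang-pin >/dev/null <<'EOF'
Package: erlang*
Pin: version 20.3*
Pin-Priority: 1001
EOF
apt-cache policy erlang    # candidate should now be the pinned 20.3 version
```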
[21:24] <Epx998> 18.10 beta is out right?
[21:26] <powersj> Epx998, beta is not out for Cosmic https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule
[21:29] <Epx998> http://cdimage.ubuntu.com/daily-live/current/ isn't a beta?
[21:33] <tomreyn> as the url indicates, it's a daily build
[21:34] <tomreyn> i.e. potentially broken, unstable, pre-release
[21:35] <tomreyn> (the download web page also says "daily build")
[21:37] <genii> Beta freeze isn't until Sept 27th anyhow for 18.10
[21:38] <genii> ( according to https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule )
[21:59] <Epx998> yeah that's ok, some dev wants to test some gpu stuff on it
[21:59] <Epx998> thanks for the info