I see a lot of talk about this and to be honest I’m not on board with a lot of the opinion. My view is that it’s fine to deploy an Enterprise’s applications to multiple public cloud providers, but wholly contain each application in its hosting provider; don’t go down the road of failing over between providers.
My thinking is driven by a number of things:
- People cite not going all in on a specific provider in case they’re at the mercy of its pricing. Well, there is a marketplace at work here – Azure, AWS, Google etc. are all competing with each other. The market will ensure pricing remains competitive.
- The cloud provider may go down. Well yes, that may happen, but probably not as often as something critical goes pop in your on-premises environment, and the cloud providers’ engineering and response teams seem to be pretty darn good at recovery – and of course the application does have an SLA accommodating it, doesn’t it? Big Enterprises do exist wholly in one provider and survive by intelligently designing their architectures to withstand such events.
- I want to move my solution from one provider to another easily. OK, but why not treat that as a project based on cost and technical ROI rather than as a push-button, magic-happens solution? While it is possible to create environments using abstractions, e.g. Terraform instead of ARM templates or CloudFormation, that’s just the infrastructure (and not even PaaS). What about moving the data? That’s a significantly harder proposition, so you have to ask what is driving the requirement.
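The data-moving point is worth making concrete with some rough arithmetic – a minimal sketch with entirely hypothetical figures (the function name and every number here is my own illustration, not real provider pricing):

```python
# Back-of-envelope estimate of moving an application's data between
# clouds. All figures are assumptions for illustration only.

def migration_estimate(data_tb, egress_usd_per_gb, bandwidth_gbps):
    """Return (egress cost in USD, transfer time in hours)."""
    data_gb = data_tb * 1024
    cost = data_gb * egress_usd_per_gb
    # bandwidth is in gigabits/s; divide by 8 for gigabytes/s
    hours = data_gb / (bandwidth_gbps / 8) / 3600
    return cost, hours

# Hypothetical figures: 50 TB of data, $0.09/GB egress, a 1 Gbps link.
cost, hours = migration_estimate(50, 0.09, 1.0)
print(f"~${cost:,.0f} egress, ~{hours / 24:.0f} days of transfer")
```

Even at these made-up rates the transfer alone runs to days and thousands of dollars – before any re-testing, cut-over planning or PaaS re-engineering, which is why I’d treat it as a project.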
This type of thinking also leads to lowest-common-denominator design – reminding me of that argument being applied to the various database vendors back in the day. Instead of using the business advantage gained from a cloud provider, you wind up forcing a mindset of implementing a bunch of infrastructure engineering to give you a level of abstraction just in case you should decide to move. Why wouldn’t I want to use the cloud provider’s managed container services and focus on the apps, where the value is, rather than spending time and money implementing and integrating the container services present in the various operating systems and available as VMs/instances?
I’m a fan of leaving the infrastructure stuff to the cloud provider as much as possible – thus I really like AWS Elastic Beanstalk, Azure App Services and the various container services, i.e. where I don’t have to do much to make it just work.
I do appreciate that some organisations have constraints that mean they really cannot achieve that level of abstraction, but it often feels like that isn’t the case – merely that the organisational culture defaults to server-based cloud use because of familiarity, ease of on-boarding and an unwillingness to push for the higher levels of abstraction.
Of course, this is my personal opinion and does not represent that of my employer.
I’ve got an old Lenovo T410 Laptop on the Insider Ring. Win10 Redstone 5 (rs5) builds have been coming out on a regular basis. Yesterday I was notified that a new version was available for installation. It broke the device so badly that a full re-install of Windows from a bootable ISO on DVD was required, without any recovery of data.
I accept that being on Insiders is a bit more risky with pre-release features, but to have it borked so badly that I can’t even roll back to the previous iteration of Win10 is significant. I tried all the recovery options available after Windows itself hung on recovering the device – e.g. safe boot, installing over the top preserving settings and data, the system file checker – and none worked (or rather they failed with error messages that more or less said “dunno, I give up”).
I’m really hoping that a bunch of telemetry made its way to Microsoft to give them insight because I couldn’t get much out of the device from an end-user perspective.
Still, it was rather a trip down memory lane going through the recovery menus, even if ultimately it came to nothing.
One of those ‘duh’ moments that now seems blindingly obvious. The received wisdom for changing the default search engine in Edge is to navigate to Settings | Advanced Settings and pick one. However, the one I wanted – Google – wasn’t there. On previous devices it wasn’t there either, and only appeared for selection after an indeterminate period had passed, which could have been days or weeks by the time I’d bothered to look again.
I’ve just realised what’s going on, the fault being me, the stooopid user. No alternative search engine options will be offered until you visit the search page within your normal browsing activity (Edge appears to discover engines that publish an OpenSearch description as you browse to them): Google wasn’t offered, I navigated to Google in the browser, looked in Advanced Settings and there it was. Doh!
My company provides me with a Windows 10 based Laptop and the Cisco AnyConnect client in order to connect to Corporate facilities such as Email, Intranet and Business Apps. I’d recently uplifted my version of Win10 to 1709 (Corp allows both SCCM/WSUS and Microsoft online updating, and I’m allowed local device admin rights) and noticed that the AnyConnect client would always Connect, then Reconnect and Reconnect again, which was annoying, especially as I only VPN in when at home or working at a client site.
Googling around suggested that IPv6 was the issue, but disabling that in the virtual network adapter that AnyConnect sets up didn’t change the behaviour. No other ideas sprang to mind, so I re-ran the connect scenario, as it was reproducible, at the same time capturing a network trace with Wireshark. I also generated the AnyConnect client diagnostics using the ‘DART’ tool, then settled down for an hour to run a side-by-side comparison. It looks like AnyConnect enumerates all the physical network interfaces and sets up its connection to the Secure Gateway (i.e. the VPN server appliance), then later finds another physical network interface, which causes the entire configuration to be torn down and the VPN connection re-established – twice.
The new physical interface was a vSwitch, but one that had IP addresses allocated from the pool handed out by the Secure Gateway, which was odd as it suggested AnyConnect’s own configuration was causing the behaviour. It did however make me recall that I have client Hyper-V enabled, and by default a vSwitch is created for my Hyper-V based VMs. I disabled the client Hyper-V feature and no longer get the triple-connect scenario.
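The pattern visible in the traces – an extra adapter turning up after the tunnel comes up – boils down to a before/after diff of the interface list. An illustrative Python sketch, not anything AnyConnect actually runs (`socket.if_nameindex()` is POSIX-only; on Windows you’d enumerate adapters via WMI or PowerShell’s `Get-NetAdapter` instead):

```python
# Snapshot the network interfaces, then diff a later snapshot to spot a
# newly appeared adapter (e.g. a Hyper-V vSwitch created mid-session).
import socket

def interface_names():
    """Return the set of interface names the OS currently reports."""
    return {name for _, name in socket.if_nameindex()}

before = interface_names()
# ... enable a feature that creates a virtual adapter, reconnect ...
after = interface_names()
new_adapters = after - before
print("Adapters that appeared mid-session:", new_adapters or "none")
```

Any name in that diff is a candidate for the interface that triggers the tunnel teardown.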
Yay – success. But then I struggled to remember whether this was an issue with the 1703 build of Win10, as that was when I first enabled client Hyper-V. I don’t think so, as it was annoying enough that I would have diagnosed it in the 1703 timeframe. Now it’s a call on which I want more – a quiet VPN connect or client Hyper-V? As I have an MSDN subscription and can create VMs to my heart’s content in Azure, I’m going with the quiet VPN…
Microsoft Windows 10 1709 Build 16299.64
Cisco AnyConnect Secure Mobility Client 4.3.04027
I have an old Lenovo T410 which I upgraded to Win10. It’s been working fine, even after an image backup and restore onto an SSD. Last week, while playing with the Azure Resource Portal, I noticed I was working over WiFi only – my LAN switch port wasn’t even showing link detection.
I did the usual diagnostic steps – checked all the physical elements such as cabling and ports, did a BIOS reset back to defaults, updated the driver, installed the Lenovo-specific driver updates – all without result. One thing I did notice was that while in the BIOS configuration the switch port link detection burst into life. When I booted back into the OS it went off again.
Uninstalling the NIC in Device Manager gave me link detection but the device itself naturally didn’t work. Weird.
After playing about with the device installation status for a bit, I started going through the property sheet for the NIC to see if it was a setting such as TCP offload. Before I’d even changed a setting, the link detection started working again. Turning off the WiFi radio showed I still had an internet connection, which was good. This has survived a few reboots, so I’m cautiously optimistic. However, I’m baffled as to why it fixed the problem – somehow the driver got tickled into life by the act of opening the property sheet.
Looking in the event log for the NIC I can see a single event about the driver not being migrated which must be a factor but I need to do some more reading to understand what that means.
When I spun up an Ubuntu client a couple of months ago I decided to use Thunderbird to access my Hotmail email account. Without really thinking about what I was doing, I changed the default setting of the account setup process to use POP as the email protocol.
I then happily used Thunderbird without issue. Sometime later I used the web interface of Hotmail and discovered that searching on specific terms gave back far fewer results than I expected. After a bit of investigation, it turns out that Thunderbird has a default option of deleting email from the server 7 days after it has been downloaded when using the POP protocol. This meant all the historic email I had in Hotmail now resided only on my Ubuntu device, and I could no longer access it from either the browser interface or other email clients.
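The retention policy that bit me is simple enough to simulate. A minimal sketch (the function and dates are my own illustration; the 7-day figure is the Thunderbird default described above):

```python
# Simulate a POP "delete from server N days after download" policy to
# show why older mail silently vanishes from the server copy.
from datetime import date, timedelta

def still_on_server(downloaded_on, today, keep_days=7):
    """True if the server copy survives under a delete-after-N-days policy."""
    return (today - downloaded_on) <= timedelta(days=keep_days)

today = date(2016, 6, 15)
print(still_on_server(date(2016, 6, 10), today))  # downloaded 5 days ago -> True
print(still_on_server(date(2016, 6, 1), today))   # downloaded 2 weeks ago -> False
```

Everything older than the window is gone from the server, which is exactly why the web interface search came up short.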
I naively started to look at what programs existed out there to upload emails into Hotmail, without success. Some time passed and I ran into an article comparing the POP and IMAP protocols. The former coincided with my understanding of the protocol; however, I realised that I had a gap in my knowledge of IMAP. It turns out that it’s a synchronisation protocol that ensures all devices have the same view of mail and folders, rather than the classic client/server affair you get with POP. A far better overview is published here.
In Thunderbird I just saved the downloaded email into a separate folder from the inbox, and deleted my Hotmail account. I then set the account up again, making sure I didn’t alter the default of IMAP. Once synchronisation had completed, I dragged and dropped the email messages in the saved folder back into the newly created inbox. I left it a couple of hours and hey presto – all my email is again available in Hotmail’s web interface (albeit that as soon as I access a restored message its date/timestamp is updated to now, meaning the message appears newer than it actually is).
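Had I known, the standard-library `imaplib` offers a way around that timestamp problem: IMAP APPEND accepts an explicit internal date for the message. A hedged sketch (the function, mailbox name and commented-out server details are my own placeholders; nothing here connects anywhere unless you fill in real credentials):

```python
# Restore a raw message over IMAP while preserving its original date,
# by passing an INTERNALDATE to APPEND instead of letting the server
# stamp the upload time.
import imaplib

def restore_message(imap, mailbox, raw_message, original_epoch):
    # Time2Internaldate formats a Unix timestamp as an IMAP INTERNALDATE
    # string, so the restored copy keeps its original date rather than
    # appearing as new mail.
    date_str = imaplib.Time2Internaldate(original_epoch)
    imap.append(mailbox, None, date_str, raw_message)

# Usage sketch (placeholders, not run here):
# imap = imaplib.IMAP4_SSL("imap.example.com")
# imap.login("user@hotmail.com", "password")
# restore_message(imap, "INBOX", msg_bytes, saved_epoch)
```

Whether Hotmail’s IMAP server honours the supplied date is its own question, but the protocol at least provides for it.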
I’ve had a 2012 Nexus 7 for a while now, and performance has been dropping off over the last few releases of Android – Lollipop being pretty poor, though better than some.
I’ve just flashed the Nexus back to KitKat 4.4 and performance is really good again. There’s a lot of conflicting advice out on the Internet, but I stuck with the instructions provided by Google, which worked out fine.
I was using Ubuntu to control the device and had to download the SDK to get at the couple of compiled tools needed – hundreds of MB for just a few executables. Still, it was a good introduction to udev, which was needed to allow the Nexus to be recognised as a device.