By Dan Scarfe, Dot Net Solutions
Recently Gartner released their magic quadrant for Cloud Infrastructure as a Service Providers: http://www.gartner.com/technology/reprints.do?id=1-1IMDMZ8&ct=130819&st=sb.
It was fantastic to see Windows Azure placed in the visionary quadrant, recognising the breadth and depth of the platform and its completeness in Gartner’s eyes. We’re seeing the IaaS offering, built on over 5 years’ innovation on the core platform, getting huge amounts of interest from our customers.
Disclaimer: Dot Net Solutions is one of Microsoft’s leading Cloud specialists. I’ve been around Windows Azure since its inception - I’ve lived and breathed it for 5 years. Our business is built on it, but that’s a choice we made because we really like it. A lot.
When reading any magic quadrant report, everyone immediately looks at the picture, but what’s far more interesting is to read the commentary.
Leader Amazon Web Services is praised for its sheer scale and for being extraordinarily innovative, exceptionally agile and very responsive to the market. The cautionary note points out that although Amazon is apparently the price leader (Microsoft has publicly stated it will price-match Amazon), it charges separately for optional items that are often bundled with competitive offerings. These include things such as load balancers and connectivity within an entire region, which are available free of charge with Azure. More cautionary is the note about Amazon having multiple generations of compute instance "families" — such as the m1, m2 and m3 families. A recent independent report on Cloud performance showed Azure running Linux VMs three times faster than AWS.
Windows Azure was also praised:
“Microsoft has a vision of infrastructure and platform services that are not only leading stand-alone offerings, but also seamlessly extend and interoperate with on-premises Microsoft infrastructure (rooted in Hyper-V, Windows Server, Active Directory and System Center) and applications, as well as Microsoft's SaaS offerings. Its vision is global, and it is aggressively expanding into multiple international markets.”
This hybrid approach to service delivery is a key part of Microsoft’s vision. Microsoft’s large on-premises installed base perceives Windows Azure as an easy on-ramp to public Cloud. UI, a key part of Microsoft’s historic success, also shows up in Azure:
“Microsoft has built an attractive and easy-to-use UI that will appeal to Windows administrators and developers. The IaaS and PaaS components within Windows Azure feel and operate like part of a unified whole, and Microsoft is making an effort to integrate them with Visual Studio and System Center.”
One of the other key value-adds with Azure is the range of additional services available, often without cost. These include Windows Azure Active Directory (free for up to 500k users), Mobile Services (free for up to 10 services) and Web Sites (free for up to 10 sites). Paid-for services such as Media and BizTalk also offer great additional functionality.
On the cautionary side:
“Windows Azure Infrastructure Services are brand-new and consequently lack an operational track record. The feature set is limited and the missing features are ones that are critical to most enterprises. Although Microsoft has a generally good uptime record with Azure PaaS components, it will be challenged to scale its IaaS business rapidly.”
The other criticism was a lack of Linux distributions and of language support beyond .NET. This really isn’t the case: Windows Azure supports a broad range of languages for its PaaS model, and the only major distribution omission is Red Hat, which will hopefully be available soon.
The other part of the report I don’t agree with is Microsoft’s score for ability to execute. Two of the ‘high’ rated measures were Viability and Track Record.
Viability describes the:
“success of their cloud IaaS business, as demonstrated by current revenue and revenue growth since the launch of their service; their financial wherewithal to continue investing in the business and to execute successfully on their road maps; and their organizational commitment to this business, and its importance to the company's overall strategy.”
Windows Azure is already a $1bn business. Storage and compute (and associated revenues) are doubling every six months. Cash shouldn’t be a problem with a $77 billion cash mountain which will be burning a hole in the pocket of Ballmer’s replacement. Azure has been the shining light in all of the recent press around Microsoft.
Track Record describes a market that is:
“evolving extremely quickly and the rate of technological innovation is very high. Providers were evaluated on how well they have historically been able to respond to changing buyer needs and technology developments, rapidly iterate their service offerings, and deliver promised enhancements and services by the expected time.”
Microsoft is a completely different business to the one it was in 2011, or even 2012. It’s been breathtaking to watch. “We’re all in” has now become “Cloud first”. Every major new Microsoft product will be Cloud first. The release cycle for new products has shifted from 3 years to one year. For Azure itself, it’s every 3 months.
One of the other values was sales execution and pricing, which is an:
“ability to address the range of buyers for IaaS, including developers and business managers, as well as IT operations organizations; adapt to "frictionless selling" with online sales, immediate trials and proofs of concept; provide consultative sales and solutions engineering; be highly responsive to prospective customers; and offer value for money. This criterion is important to buyers who value a smooth sales experience, the right solution proposals and competitive prices.”
This is the crucial piece. With organisations streamlining procurement and reducing the number of suppliers, a provider able to sell Cloud effectively as an extension of a pre-existing commercial agreement is very powerful.
“Microsoft's brand, existing customer relationships and history of running global-class consumer Internet properties have made prospective customers and partners confident that it will emerge as a market leader in cloud IaaS. The number of Azure VMs is growing very rapidly. Microsoft customers who sign a contract can receive their enterprise discount on the service, making it highly cost-competitive. Microsoft is also extending special pricing to Microsoft Developer Network (MSDN) subscribers.”
Having an effective partner network able to scale and deal with demand is crucial and is something that AWS lacks:
“AWS has field sales, solutions engineering and professional services organizations, but the rapid growth of AWS's business means that sales capacity is insufficient to consistently satisfy prospective customers who need consultative sales.”
Microsoft has an extensive account management structure and over 40,000 partners in the UK able to sell Azure. That’s a lot of bodies.
Azure still needs to do a bit of catching up from a functionality perspective according to the study, but we need to bear in mind the actual date of the study was only 13 days after Windows Azure IaaS went live. Since then there have been a number of major new releases. However, right now on August 28th 2013 there are some features that aren’t available on Azure that do exist on AWS. Azure is missing direct connections between its datacentres and customers’. Azure is missing long-term offline storage and hard drive uploads. Oracle is only available as a VM, not as a service.
But that’s the case today and not necessarily next week or next month. The speed of innovation on Azure is staggering right now.
The other potential players are split in two:
VMWare customers such as Savvis, Terremark and CSC. Terremark has just announced a furthering of its strategic partnership with VMWare. These vendors will struggle to differentiate themselves from each other. VMWare’s hybrid story is all about ease of portability between customers’ private Clouds and its public Clouds, but this applies equally to any VMWare hybrid partner. The report also points out that whilst it is ”straightforward to move VM images from one cloud to another, truly hybrid multicloud scenarios are rare.”
OpenStack providers, including Rackspace and HP. The problem for OpenStack partners, as with VMWare customers, is differentiation. As soon as “partners” seek to differentiate themselves by building protected IP on an open platform, the community starts to break down. The same can be seen with Android, where Amazon runs a private branch and Google effectively owns the public branch. The report also points out that the fact “an ecosystem is "open" has nothing to do with actual portability.”
For everyone else:
“The gap between the market share leader and the rest of the market is widening. Many providers have solid offerings that encompass the most fundamental capability in this market — the ability to provision VMs rapidly on-demand, coupled with storage and an Internet connection. But most are finding it challenging to move beyond this point. Customer expectations are increasing, use cases are broadening, and many providers have neither the ambition nor the resources to compete across the full breadth of the addressable market.”
The report highlights that, today, steady load on machines can be cheaper on premises. But it’s only cheaper if you have existing investments and existing staff. A huge part of the cost of each IaaS instance is the cost of running, housing and looking after the box. That’s why Cloud has been so popular with start-ups. But start-ups are being born each day inside existing companies, and if those start-ups and joint ventures are adequately shielded from legacy, retained costs, Cloud can provide significant commercial advantages. It can also provide a clear cost breakdown and removes the IT “black hole” line item. Sometimes though, time is money, and getting out of the blocks early can be worth a premium, even if you bring it back on-premises in the future.
Cloud IaaS is still, very much, in its infancy. New terms and new concepts are still being invented. One of the most interesting parts of the report was a new phrase: “Cloud Enabled Systems Infrastructure”. This phrase could be seminal, as it describes a model of Cloud consumption which is truly friction-free. For widespread adoption of Cloud amongst enterprises to happen, it has to be easy. Really easy. Only now are we starting to see a generation of Cloud platforms, be they VMWare or Hyper-V based (although interestingly not AWS), where public Cloud becomes an extension of your internal private Cloud or legacy infrastructure. When moving an application between your datacentre and a service provider’s is as easy as ordering something through the Amazon mobile app, it will hit critical mass. And today, as an industry, we are almost there.
The next two years are going to be incredibly fun to watch pan out. I think it will turn into a two horse race between AWS and Azure and I’m delighted to have a front seat.
Find out more about Windows Azure www.windowsazure.com
Find out more about Dot Net Solutions www.dotnetsolutions.co.uk
By David Allen, Microsoft System Center MVP.
IT process automation is the ability to orchestrate and integrate tools, people and processes through a workflow. The benefits of automation include reduced human error, faster responses to problems and more efficient allocation of resources.
By increasing the level of automation and eliminating common, repetitive tasks, companies can reduce operational costs, as well as the amount of specialized staff needed to manage their systems. This also means highly skilled IT professionals can be freed up to manage more strategic company projects and initiatives, and ultimately provide a higher quality of service to the business. Although automation meets the business objectives of reducing costs, increasing productivity and maximizing efficiency, the biggest concern is integration between applications and infrastructures. Automation should be a best practice in most organizations, as it will assist in managing and integrating across increasingly complex infrastructures.
Below are three common uses and examples of IT automation that save time and money in the data centre:
1. Self Service
· Although self-service isn’t a specific example of automation, it is a very important offering that is driven by automation. As processes become automated, the next logical step is to provide a portal for end-user self-service, which will further reduce the amount of time required by IT staff and further the reduction in cost.
The Windows Azure Pack for Windows Server provides a portal, consistent with Azure, that gives users a single view of the self-service services available within the data centre, from a service provider, or in Windows Azure.
Figure 1 - Windows Azure Pack
The Windows Azure Pack, once implemented, provides the portal as shown in the image above. This portal allows administrators to define self-service services that are available to users, whether they be virtual machine, website or database provisioning. The real power of Windows Azure Pack for Windows Server though is in its extensibility, as the portal can be easily updated to include any company specific automated processes, such as application or user provisioning.
2. Virtual machine provisioning
· In today’s virtual world, the provisioning and de-provisioning of virtual machines is commonplace, and many hours are spent following the business processes for this to occur and technically performing the work required. However, in most scenarios, the provisioning and de-provisioning of a virtual computer can be automated. With System Center 2012 it’s possible to achieve this not only for single computers, but for multiple computers as part of an application: the deployment of multiple virtual computers, with different configurations, based on a template, can be fully automated.
Figure 2 - System Center Virtual Machine Manager Service Template
A service template, as shown above, defines the configuration of a service. The service template includes information about the virtual machines that are deployed as part of the service, which applications to install on the virtual machines, and the networking configuration needed for the service. A great example is shown in the image above, where there is a SQL Server provisioned with a number of DAC packages applied, a middle tier computer configured with an application delivered by Server App-V, and a web tier computer with the required roles and features enabled, and all communicating on the same network.
3. New User Provisioning
· The provisioning of new user accounts in Active Directory is usually a very repeatable task which, over time, can consume a large number of work hours. By automating this basic task, accounts can be created in a reliable and standardized way, freeing up resources for other, more important IT tasks.
Figure 3 - System Center Orchestrator Runbook
This example can be extended to provide far more functionality than simply creating a user account and mail-enabling the user. Some great examples of automated user provisioning have included setting group memberships based on department, generating a random password and emailing it to the manager, setting network access permissions, setting an expiry date and assigning the account to the relevant group or organisational unit.
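As an illustration only, the kind of provisioning step such a runbook automates might script against Active Directory along these lines. This is a minimal sketch, not the author’s actual runbook: the names, OU, group and password length are all hypothetical placeholders, and a real runbook would pull these values from the request data.

```powershell
# Hypothetical provisioning sketch using the ActiveDirectory module.
# All names, the OU path and the group are example values.
Import-Module ActiveDirectory

$firstName = "Jane"
$lastName  = "Doe"
$sam       = ($firstName.Substring(0,1) + $lastName).ToLower()

# Generate a random initial password
Add-Type -AssemblyName System.Web
$password = [System.Web.Security.Membership]::GeneratePassword(12, 3)

# Create the account, forcing a password change at first logon
New-ADUser -Name "$firstName $lastName" `
           -SamAccountName $sam `
           -GivenName $firstName -Surname $lastName `
           -Path "OU=Staff,DC=contoso,DC=com" `
           -AccountPassword (ConvertTo-SecureString $password -AsPlainText -Force) `
           -ChangePasswordAtLogon $true -Enabled $true

# Group membership based on department
Add-ADGroupMember -Identity "Sales" -Members $sam
```

Mail-enabling the account and emailing the initial password to the manager would typically be further activities chained on in the runbook.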
The above are three common examples of automation, and for each there will be more than one way to achieve the desired outcome. Using Microsoft Windows Server and System Center, automation can be provided not only across the Microsoft stack of products, but also across the majority of third-party products, with out-of-the-box integration into VMWare ESX and XenServer, support for cross-platform operating systems, and integration into HP, IBM, CA, BMC and other vendor applications. This diversity provides everything required to implement automation in almost any data centre, and you do not have to have all the System Center products deployed; automation can still be achieved if a CA product is used for service management, for example.
In summary, automation helps to integrate and orchestrate operational processes across multiple data, departmental and application silos. Using System Center for automation can enable and enforce best practices in an IT organization and help align IT services with business objectives through repeatable, reliable and standardized processes. However, it must be remembered that great tools will not make a bad process better. It is important to make sure business processes are fit for purpose before attempting to automate them.
So, where to start? Initiatives such as ITIL outline best practices for all IT activities. The service support areas of ITIL (incident, problem, configuration, change and release management) make up the daily operational tasks within IT. As ITIL best practices continue to be implemented, these are a good place to start with automation, since they are usually the most critical areas of IT operations.
Find out more information on Windows Azure Pack for Windows Server, Windows Server and System Center.
By John Berkoski - Head of Consulting, PointBeyond Ltd
Our consulting business focuses on developing advanced SharePoint-based business solutions for customers who want to get more out of their existing SharePoint installations. We knew we were good at it by the number of repeat customers we were seeing!
Sometimes customers wanted additional solutions, but on occasion they wanted general support – a trustworthy resource they could ring up for quick advice, help with troubleshooting, or for additional enhancements for their solutions. Our support service was created as a result.
To date, we had been using a home-grown set of SharePoint sites and lists to manage our support cases. In the beginning, with just a few support customers, the caseload was easy enough to manage. However as we added more support customers, the amount of case management time required seemed to increase exponentially and before long we were spending more time managing cases than actually resolving them.
Fortunately, we were able to “eat our own dog food” by capitalising on our own expertise in helping customers with workflow solutions. In literally little more than a day, five members of our consulting team were able to create a number of workflow-based mechanisms to assist us with support case initiation, updates and reporting. We were able to deliver the solution this quickly because of the benefits that working with workflow offers over traditional software development. We now have a fully automated case management system that eliminates most of the manual management tasks that had been encumbering us.
How did we build the solution so quickly, you may be wondering? We blocked out a day for part of our team to meet at an off-site location to bash out a solution for our growing support business. Our day started with a couple of hours of planning, where we discussed requirements, did some white-boarding and wrote lots of user stories on sticky notes. We then divided up the work, and by lunchtime we had our first processes completed.
We continued to iterate through the backlog of screens, processes and reports and by the end of the day we successfully achieved our goal of a fully functional new support system. More remarkably, we were able to achieve this through out-of-box functionality, with a bit of SharePoint Designer magic thrown in, and with no custom code whatsoever. We’re now piloting the system in anticipation of a full rollout once the system has stabilised.
When fully rolled out, our customers will benefit from the following new automated features:
· Automatic acknowledgement when a support case has been received and assigned
· Automatic updates when we start or make progress on a case
· Periodic updates on all open cases with latest status brief
· Notifications when we become aware of critical updates from Microsoft
Additionally, our team members will also benefit from new, automated features as well:
· Notification when a case is assigned to a team member
· Periodic reminders of open cases assigned to a team member
· Notification when a customer accepts a resolved case as closed
· Robust reporting on aged and closed cases
As a footnote, this development effort was fairly typical of our normal customer-facing engagements, so the “done in a day” headline is perhaps a bit misleading. In addition to the efforts of the day, we needed a bit of advance planning for both development infrastructure and system design. The system also required testing and a few tweaks here and there. We wrote little documentation. Even the day of development represented a five man-day effort for a typical consulting engagement. That aside, the fact that we were still able to build a complete bespoke business application in days rather than weeks or months is still quite remarkable!
Providing this level of service to our customers and team would have otherwise required that we hire a full-time support administrator to assist me with managing cases and creating reports. Instead, with a small bit of workflow automation, testing and deployment, we were able to create a bespoke, professional quality support solution that allows us to once again focus our time on resolving customer issues instead of managing them. Oh, what a relief!
About the Author
John Berkoski, Head of Consulting, PointBeyond Ltd
John has over twenty years’ experience of leading and delivering SharePoint and bespoke software solutions to large and small clients in a wide variety of industry areas. He manages the PointBeyond delivery team and oversees the management of projects.
By Jeremy Thake, VP of Global Product Innovation at AvePoint Inc.
You’ve got your Microsoft SharePoint deployment up and running, and your SharePoint admins have spent countless hours tidying things up to ensure your infrastructure architecture is under control. But there has to be an easier way to keep SharePoint running at an optimal level without devoting the IT man-hours, right? Well, there is: through automation.
An enterprise platform like SharePoint allows individuals to get together and share information in a container (a room, site, bucket, etc.). This container will have a lifecycle from the time it’s provisioned to the time it’s deprovisioned, and guess what - provisioning a new container is a task that can be automated too. But there are a host of other events that occur during the lifecycle that can be automated and taken off the plate of your IT department.
Here’s my top 10 SharePoint automation list:
1) Self-service granting/removing access – The most common task a container will go through is the granting of permissions and removing of permissions to that container so users can be part of the collaboration.
2) Self-service transferring/cloning access – The ability to transfer permissions from one individual to another or give someone the same permissions as another user is a very common scenario.
3) Self-service onboarding content – When containers get created, often there is content that already exists in file shares, on local drives, and in people’s My Sites that needs to be moved into the new container.
4) Self-service change business contact – Typically a business contact might change during its life, either due to people changing roles and responsibilities or leaving the organisation. Keeping track of who is accountable for the container is one of the most important points as they go through the lifecycle.
5) Self-service deployment of customisations – Customisations such as branding, content types/document types, extra functionality, or apps being added.
6) Self-service archiving of container – Often business contacts will want to clean up existing containers: information created over three years ago, and not accessed or modified since, can be archived off to the enterprise archival system, deleted completely, or simply marked as an archived area.
7) Scheduled lease renewal of container – With all containers, at some point they will need to be deprovisioned. The business contact is reached out to on a scheduled basis to ask whether they still require this container.
8) Scheduled inactivity alerts of container – With all containers, sometimes they will go dormant and unused for a while. This is a good opportunity to reach out to the business contact through a scheduled alert to find out if they really still need the site.
9) Scheduled security audit recertification – This is especially important in financial services or public sector industries. This is a scheduled alert to the business contact who is accountable for a container to recertify that the people who have access to the content should have access.
10) Scheduled archiving of old content – As well as self-service cleanup, scheduled archiving profiles can clean up proactively rather than waiting for business contacts to keep them accurate and up to date with quality content.
By automating these tasks, you’re taking the burden off of your IT department to manually perform each one, while empowering the end user to take some control of their own content. This will allow your IT department to spend their time on more important issues, while still ensuring that your SharePoint deployment stays neat and tidy.
About the Author:
As AvePoint’s Vice President of Global Product Innovation, Jeremy utilises his software consulting, development, and architect experience as well as his deep expertise in Microsoft technologies – recognised as a Microsoft SharePoint MVP since 2009 – to educate the global SharePoint community. Jeremy also works directly with enterprise customers and AvePoint’s research & development team to develop solutions that will set the standard for the next generation of collaboration platforms, including Microsoft SharePoint 2013.
By Asavin Wattanajantra, writer at Metia.
When we’re talking about fire risk in IT, security is the tinder dry forest waiting for a wayward spark. While it’s great to have the tools and processes in place to put out fires and minimise their impact when they inevitably start, prevention is always better than cure.
If you can stop the fires starting in the first place, you’ll have more time to focus on the business, comfortable in the knowledge that systems are secure.
Windows 8.1 has numerous features to help ensure that the risk of a security-related fire starting in your business is minimised. Here are five of the top risks and how Windows 8.1 can help:
1. Risky personal devices
Windows 8.1 is part of our quest to make sure that as many devices as possible, both enterprise and consumer, have the tools that make life a whole lot easier for IT administrators.
Windows 8.1 continues the trend of building security chips with cryptography functions into laptops: a Trusted Platform Module (TPM) can make hardware ready out of the box for Bring Your Own Device (BYOD) requirements. TPM 2.0 is required for all connected standby (InstantGo) devices, which Windows 8.1 is designed to support.
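If you want to check what hardware you already have before planning BYOD policies, Windows 8 and later ship a TrustedPlatformModule PowerShell module. A quick sketch, run from an elevated prompt:

```powershell
# Query the local TPM; requires the TrustedPlatformModule module
# (Windows 8 / Server 2012 and later) and administrator rights.
$tpm = Get-Tpm

if ($tpm.TpmPresent -and $tpm.TpmReady) {
    "TPM is present and ready for use"
} else {
    "TPM is missing or not provisioned; check firmware settings"
}
```

On machines without a TPM, or where it is disabled in firmware, `Get-Tpm` reports it as not present or not ready, which is your cue to look at the device’s UEFI settings before relying on TPM-backed features.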
2. Devices without encryption
Whether it's due to a mistake by an employee or the work of an opportunist thief, there's always a chance of IT equipment going missing. A criminal might get more use out of financial details held in an Excel spreadsheet than from the stolen laptop itself.
BitLocker Drive Encryption prevents criminals from accessing confidential information by scrambling the data on entire volumes. Windows 8.1 adds BitLocker support for device encryption on x86- and x64-based computers with a TPM that supports InstantGo.
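As a rough sketch of what this looks like in practice, Windows 8.1 also exposes BitLocker through PowerShell. The drive letter and protector choices below are just examples; your organisation’s policy may dictate different encryption methods and protectors.

```powershell
# Check the current encryption state of the OS volume (example drive letter)
Get-BitLockerVolume -MountPoint "C:"

# Turn on BitLocker, sealing the key to the TPM
Enable-BitLocker -MountPoint "C:" -EncryptionMethod Aes256 -TpmProtector

# Add a recovery password so the volume can be unlocked if the TPM is unavailable
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector
```

In a domain, you would typically also back the recovery password up to Active Directory so the service desk can retrieve it, rather than leaving it with the user.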
3. Corporate data leakage
Encryption is good, but control is better. Windows 8.1 wipes corporate content from a device if needed, while keeping personal data untouched.
Workplace Join gives employees access to enterprise files from their devices, but restricts access to the rest of the system. Permissions ensure they can only work with the files they've been authorised to work with.
4. Forgotten passwords
We know passwords can be hard to remember. For simplicity people often use the same password for every login. But in making life easier they also create a significant security risk. Biometrics solves the issue: a unique identifier that can’t be forgotten or stolen.
We now support biometric security devices with fingerprint readers running Windows 8.1. Every time a user sees a Windows credential prompt, they can use biometrics, thereby eliminating the need for a password for secure sites and in-app user account validations.
5. Malware
IT departments wage a daily war against ever-evolving viruses, worms, Trojans and other malware. Windows 8.1 has been battle-hardened with enhancements. The improved Windows Defender offers real-time protection against threats, with high-performance monitoring that can detect bad behaviour in the memory, registry or file system before signatures are even created.
As every fire fighter knows, we’re never going to reduce the risk of fires starting to zero. We need to expect them to happen and be ready to respond quickly when they do. But steps can be taken to minimise the likelihood of security being breached in the first place, and Windows 8.1 gives you the tools that mean you’ll be able to get on the front foot.
In a busy IT department it can be difficult to think about long-term planning. But there are (rare!) times when you've successfully fought all the fires you need to that day and you’ve freed some time to strategically plan; to really think about how to make your organisation’s technology run more smoothly. Here are some activities for all the time you free up, all of which could make your job much easier in the long run.
Assess your IT
If it's been a while since you reviewed your technology, your business may have already outgrown its software and hardware. Have a look at where you can make changes. Questions you could ask include:
Find ways of saving money
Although the economy is improving, budgets are tight. That's why you may have to think about cloud computing, which moves resources outside of the business. This means you can reduce the burden on your IT, spread your costs, and instantly upgrade. Think about software such as Office 365, which provides business-class work tools for a monthly subscription.
Virtualisation could be an option, which allows you to use less system hardware and extend its lifetime. You can also reduce management and maintenance costs, while more efficient servers make it easier to deploy software so your business becomes more agile. Windows Server 2012 may be a good choice if you're heading down this road – it has built-in virtualisation technology.
Get the right expertise in
Sometimes it pays to get in the experts. If you suspect your business will find it difficult to perform a full-scale upgrade by itself, then it could make sense to bring in a reseller which does. Reputable resellers will help you pick the hardware and software suitable for your business, with little downtime. They can:
Look into the future
Is it time to upgrade your systems? Sometimes it’s a necessity. IT departments running Windows XP are going to have major problems from next April as support is ending, leaving them without required security patches from that date forwards. IT managers need to seriously consider its replacement, such as Windows 8, and be ready for a touchscreen and Bring Your Own Device (BYOD) future.
Any purchases or upgrades you make need to fit into your long-term IT strategy. You should understand the big picture. IT is such an important part of any business that any decisions you make will cause ripples from top to bottom. Of course, doing nothing is always an option, but do you want to take that risk?
Last month, we at Microsoft were making good on our monthly theme of sprucing up over the summer.
I personally was cleaning in and around my desk, which turned out to be a larger job than first thought, mostly down to the jungle of tangled wires which had accrued between my monitor, laptop and other desk appliances.
This was just around the time we were thinking of competition ideas we could run to give you guys and girls a few freebies, and so the #TechNetTidy idea was born.
A small fun competition kicked off through our social channels this month, giving you the chance to share with us and the world wide web an insight into your technical room heaven or hell, and of course to win a few Microsoft goodies along the way.
Two awards were up for grabs -
1. The not so tidy - ‘Chaotic Cabled Colin’.
2. The perfectly organised - ‘Super Server Sammy’.
Firstly, a big shout out to all who entered. We had some great entries this month: fewer super servers than we expected, and more server room nightmares. An honourable mention goes out to this month's mascot, 'Dangerous Brian'.
However, after much thought and consideration we agreed upon this month’s winners.
Congratulations! An exclusive #TechNetTidy T-shirt and award-related goodies are on their way out to both of you in the post.
If you have entered the competition thus far but didn't win, don't fear: #TechNetTidy will continue next month, so there is still plenty of time to win exclusive goodies whilst sharing your server room heaven or hell with us and the TechNet audience.
To find out how you can get involved, or to check out our rules and T&C’s, click here.
Please note: this is an after-hours post, specifically about connecting a Canon EOS 6D to Windows 8/8.1. I have written it for two reasons: so I can remember how to do it, and because you might need to do something like this for a camera enthusiast you know who isn't a networking guy.
Canon have made it relatively easy to connect the new EOS 6D, 70D etc. to your Android or iOS device, or to a wifi hotspot to which your PC/laptop is connected. However, what I wanted to do was to configure Windows 8 as an ad hoc wireless access point so I could remote shoot wirelessly from my Surface Pro anywhere I happened to be: jungles, mountains, and the various events I go to. Windows 8 no longer has a UI for this, so you need to run a couple of netsh commands from an elevated prompt to get it working:
netsh wlan set hostednetwork mode=allow ssid=MyWIFI key=MyPassword
netsh wlan start hostednetwork
…where MyWIFI is the wireless network name you want and MyPassword is the password used to connect to it. The first command defines the hosted network and the second starts it, which adds a new adapter into Network Connections.
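Two companion netsh commands are handy once the hosted network is up: one to check its status and connected clients, and one to stop it when you have finished shooting. These are standard netsh wlan commands rather than anything specific to this setup; run them from the same elevated prompt:

```
:: Show whether the hosted network is running and how many clients are connected
netsh wlan show hostednetwork

:: Stop the hosted network when you have finished shooting
netsh wlan stop hostednetwork
```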
In my case I renamed my connection to Canon. Also note that Deep6 has a 3 after it, as I tried this a few times! Another thing you may see on forums is that you need to set up sharing when creating connections like this; that's only true if you want to do old-style Internet Connection Sharing. I don't need that for this scenario, which is just as well, as our IT department has prevented me from doing it in group policy.
On my Canon EOS 6D I need to enable wifi,
then set it up by selecting the wifi function, which is now highlighted. From here I want to set up a C connection, which is the Remote Control (EOS Utility) option.
I have already done this a few times,
so to set up a new connection I choose Unspecified. Now I need to find the network I created on my Surface Pro by finding a network.
My ad hoc network is called Deep6, as opposed to FAF, which is my home wireless network.
My key is in ASCII, so I select that on the next screen, and then I get this dialog to enter my password.
Note that you have to use the Q button on the back of the camera to enter the text window. I am then asked about IP addresses; I select automatic, as my wireless network will handle that for me. Then I can confirm that I want to start pairing devices.
and then I will see this..
I can now check that my 6D is talking to my new wireless access point (which I have called Deep6).
as you can see I have one device connected.
So now I can use the supplied Canon software, the EOS Utility, to control my camera. Or so I thought, only all the control options are greyed out. This is because you need to change the preferences to install and configure the WFT utility, which detects your Canon and allows you to control it. To do this, select the option to add the WFT pairing software to the startup folder.
You'll then get a little camera icon in your system tray, and when your Canon is connected it'll pop up this window.
Click Connect and you'll see an acknowledgement and confirmation on the camera.
in my case my Surface is called Vendetta. I click OK, and I am good to go and the camera saves the settings for me, which is great and in fact I can save 3 of them. In my case I have saved my surface connection and FAF to connect to my home wireless router.
The Canon EOS Utility will now work..
Now I can start to have fun with this setup and my shots get saved to my Surface Pro..
In my last post I mentioned how Marcus Robinson had adapted PowerShell scripts by Thomas Lee to build a set of VMs to run a course in a reliable and repeatable way. With Marcus's permission I have put that setup script on SkyDrive; however, he has a proper day job running his gold partnership Octari, so I am writing up his script for him.
Notes on the script.
Unattend.xml contains the instructions you can give to Setup as the operating system comes out of SysPrep. Marcus has declared the whole unattend.xml file as an object, $unattendxml, which he then modifies for each virtual machine to set its name in Active Directory, the domain to join it to, its fixed IP address and its default DNS server.
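As a rough sketch of that approach (the paths, machine name and XML layout here are illustrative, not taken from Marcus's actual script), the file can be loaded as an XML object, edited per machine, and written back out:

```
# Load the template unattend.xml as an XML object (path is illustrative)
$unattendxml = [xml](Get-Content "C:\Lab\unattend.xml")

# Walk the components and set per-VM values such as the computer name;
# the domain, fixed IP address and DNS server would be set the same way
foreach ($component in $unattendxml.unattend.settings.component) {
    if ($component.ComputerName) { $component.ComputerName = "LabSvr1" }
}

# Save the modified copy, ready to be injected into that VM's VHD
$unattendxml.Save("C:\Lab\LabSvr1\unattend.xml")
```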
He makes use of functions to mount the target VHD, copy in the modified unattend.xml and then dismount it. Overarching this is a Create-VM function which incorporates these functions to create his blank VMs with known names and IP addresses. However, these VMs are not started, as there is as yet no domain controller for the other lab VMs to join.
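The mount, copy and dismount step might look something like this. This is my sketch using the Windows Server 2012 Hyper-V cmdlets, with illustrative paths, names and sizes rather than Marcus's actual values:

```
# Mount the VHD and work out which drive letter it was given (paths illustrative)
$vhdPath = "C:\Lab\LabSvr1.vhdx"
$drive = (Mount-VHD -Path $vhdPath -Passthru |
          Get-Disk | Get-Partition | Get-Volume).DriveLetter

# Copy the per-VM unattend.xml into the offline image, then dismount
Copy-Item "C:\Lab\LabSvr1\unattend.xml" "${drive}:\Windows\Panther\unattend.xml"
Dismount-VHD -Path $vhdPath

# Create the VM around the prepared VHD, but don't start it yet:
# there's no domain controller for it to join at this point
New-VM -Name "LabSvr1" -MemoryStartupBytes 2GB -VHDPath $vhdPath
```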
LabSvr1 in the script is going to be that domain controller, so the first thing to do is add in the AD Directory Services role. Note the use of the PSCredential PowerShell object to store credentials, and of ConvertTo-SecureString for the password, so that the script can work securely on the remote VMs. Starting the VM takes a finite amount of time, so Marcus checks to see when it's alive.
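That pattern can be sketched like this; the machine name, account and password are placeholders, and this is my illustration rather than Marcus's exact code:

```
# Build a PSCredential without prompting, so the script can run unattended
# (the plain-text password here is a placeholder; acceptable for an isolated lab)
$password = ConvertTo-SecureString "Passw0rd!" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("LAB\Administrator", $password)

# Wait until the VM responds on the network, then add the AD DS role remotely
while (-not (Test-Connection -ComputerName "LabSvr1" -Count 1 -Quiet)) {
    Start-Sleep -Seconds 10
}
Invoke-Command -ComputerName "LabSvr1" -Credential $cred -ScriptBlock {
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
}
```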
Marcus then has a section to install various other workloads, all from the command line: Exchange, SharePoint and my favourite, SQL Server. Before he can install some of these he has to install prerequisites such as the .NET Framework 3.5 (aka the NET-Framework-Core feature), and to do that he puts in the Windows Server install media. Having installed SQL Server he can then copy in the databases he needs, and here I might have attached them as well, as SQL has PowerShell to do this:
# Load the SQL Server management objects and connect to the local default instance
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "(local)"

# Build the list of database files and attach the database
$sc = New-Object System.Collections.Specialized.StringCollection
$sc.Add("C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Mydatabase.mdf") | Out-Null
$server.AttachDatabase("myDataBase", $sc)
Anyway there’s lots of good stuff in here, and I’ll be using it to make my various demos for our upcoming IT Camps, now that the Windows 8.1 Enterprise ISO is on MSDN subscriptions.