Connecting TeamCity to Visual Studio Online using Git source control

I was on a client project that was using TeamCity for their builds. I migrated their source control from SVN to Visual Studio Online (as they had fewer than 5 users it was free). But I had issues finding any documentation on how to successfully connect TeamCity to Visual Studio Online when you are using Git for the source control. Hopefully these steps will help someone else in the future.

Where I was going wrong was treating it like a TFS project; instead, I should have been treating it like a standard Git repository.

For this blog post I am using the TFS team project I created during a live demo I gave for the Windows Azure web camp.

Step 1: Enable alternate authentication credentials

You will need to tell Visual Studio Online to allow other tools (such as command-line Git tools & TeamCity) to log in using a username & password.

  • Load your team portal, then in the top right click “My Profile”.
  • Go to the Credentials tab and click “Enable alternate credentials”.


Step 2: Obtain the URL for your Git repository

The easiest way I’ve found to get the URL for your Git repository is to open a Git command prompt and list the remote origin:

  • In Visual Studio, go to the Changes section in the Team Explorer tab.
  • Click “Actions” and select “Open Command Prompt”.
  • In the command prompt type
    git remote show origin


Step 3: Enter details into TeamCity

This is the part that confused me, as I kept trying to connect with the TFS plugin. The secret is to just treat it as a Git repository.

  • In your TeamCity project, click to add a “New VCS Root”
  • Type of VCS: Leave as <guess from repository URL>
  • Repository URL: Enter the Git repository URL obtained from the command prompt (in this example it would have been https://ausazurewebcamp.visualstudio.com/defaultcollection/_git/MelbourneSoftdrink)
  • Username: Enter your Visual Studio Online username
  • Password: Enter the password you created when enabling Alternate Credentials.

 

Once it has connected you should be able to just click “Auto-Detect build steps” and have TeamCity download your source code and automatically find your .sln file.


Build 2014 – Day 1 keynote

For the last couple of years it has been a tradition that I capture a stream of consciousness as I watch the big Microsoft keynote announcements at Build, PDC, and TechEd North America. I enjoy doing it so that my work colleagues are able to catch up on the news as soon as they wake up in Australia, and for anyone else who wants an overview of the keynote without needing to dedicate hours to watching it.

As I live blogged it, the post is a stream of text and screen captures as they happened in real time. I have added additional links and a summary of the highlights below:

Highlights

The entire keynote & screenshots are continued below.


Is it time to open source Silverlight?

Call to action: Vote on UserVoice for Silverlight to be open sourced: http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/5042512-open-source-silverlight

For all intents and purposes, Microsoft now views Silverlight as “done”. While it is no longer in active development, it is still being “supported” through to 2021 (source).
In today’s age of the “modern public web”, with its wide variety of devices, Silverlight’s purpose no longer stands.

However, there is still a section of the .NET community that would like to see further development done on the Silverlight framework. Its nice collection of portable technologies gives it a small niche in the desktop environment. A quick look at some common request lists brings up the following stats:

Rather than letting Silverlight decay in locked-up source control in the Microsoft vaults, I call on Microsoft to release it into the hands of the community to see what they can build with it. Microsoft may no longer have a long-term vision for it, but the community may find ways to extend it that Microsoft didn’t envision.
Earlier this year Microsoft open sourced RIA Services through the Outercurve Foundation (http://www.outercurve.org/Galleries/ASPNETOpenSourceGallery/OpenRIAServices); it would be great to see this extended to the entire Silverlight framework.

We’ve seen what can happen with amazing technologies when they are released into the wild. For example, id Software released the Quake 1 source code to the community, and it has since been extended greatly and ported to a variety of platforms. A version was even created for Silverlight (http://www.innoveware.com/ql3/QuakeLight.html), which makes sense, as XNA running on Silverlight was a popular technology for students.

I’ve used games as examples of ways to extend it, as that is what hobbyists usually latch onto first. But there are equally good reasons why people still using it for internal LoB applications would want to continue extending the core framework, e.g.:

Silverlight still has a nice portable core of useful technologies; now is the time to start asking whether it should be open sourced rather than left to mothball. There may be uses in the community for it now, but in another 2-3 years its usefulness would be lost, which makes this a great point to release Silverlight to the community.
Microsoft, let the community know if there is a way we can assist in making this happen.

By David Burela

Windows Azure Website issue with Portable class library

I was developing a website that had common logic held in a portable class library. However, when I used Git deployment, compilation on the Azure servers would fail with:
The reference assemblies for framework ".NETPortable,Version=v4.5,Profile=Profile78" were not found.

After much debugging, it seems the issue is that the Azure build servers don’t have all the PCL profiles. As a short-term fix, you can go into your PCL properties and remove support for Windows Phone 8. I changed my project to only support .NET Framework 4.5 & Windows Store, and this resolved the issue.

By David Burela

Error when using HTTP Portable class library & compression

I was trying to use the new HTTP Portable class library with the new compression capabilities (as described in this MSDN post).

I created a portable class library that retrieved data, and then used that library in my app. However, my app kept throwing this error:
Method not found: ‘Void System.Net.Http.HttpClientHandler.set_AutomaticDecompression(System.Net.DecompressionMethods)’.

After searching for hours, I discovered the issue is that you need to add the portable HTTP client to BOTH your portable class library AND any app that consumes that assembly. I resolved the issue simply by adding the portable HTTP client NuGet package to my app.
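
For reference, here is a minimal sketch (not the exact code from my project) of the kind of call that trips this error. It assumes the portable HttpClient NuGet package (Microsoft.Net.Http) is installed; setting AutomaticDecompression is what ends up invoking HttpClientHandler.set_AutomaticDecompression, which is the method the runtime cannot find when the consuming app is missing the package:

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class DataClient
    {
        // Asking for automatic GZip/Deflate decompression is the call that fails
        // with "Method not found" when the app consuming this PCL does not also
        // reference the portable HttpClient NuGet package.
        public async Task<string> GetDataAsync(string url)
        {
            var handler = new HttpClientHandler
            {
                AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
            };

            using (var client = new HttpClient(handler))
            {
                return await client.GetStringAsync(url);
            }
        }
    }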

Windows Azure editorial for MSDN flash

I was asked to write an editorial for MSDN Flash. It hasn’t been published yet, but here is the raw article I submitted.

 

At Build 2013, Microsoft covered a lot of the recent activity within Windows Azure. It is an exciting time to develop on the Azure platform. Here are the most interesting announcements for me from the conference:

The general availability of Windows Azure Websites and Windows Azure Mobile Services was announced. Azure Websites is an enterprise-grade way of hosting websites that can easily scale and allows for rapid deployment. Windows Azure Mobile Services helps support the backend of your mobile apps. You can rapidly create tables to hold data and immediately expose them via RESTful services. Microsoft also helps bootstrap your app with templates for Windows 8, Windows Phone, iOS, Android & HTML5.
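
To give a feel for how little client code is involved, below is a small illustrative sketch of inserting a record into a Mobile Services table from a .NET app using the WindowsAzure.MobileServices NuGet package. The service URL, application key, and TodoItem class are placeholders rather than a real project:

    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public class TodoItem
    {
        public int Id { get; set; }
        public string Text { get; set; }
    }

    public class TodoService
    {
        // Placeholder service URL and application key for illustration only.
        private readonly MobileServiceClient client =
            new MobileServiceClient("https://yourservice.azure-mobile.net/", "YOUR-APPLICATION-KEY");

        // Inserts a row into the TodoItem table; the same table is exposed over
        // REST to the Windows Phone, iOS, Android & HTML5 clients.
        public Task AddAsync(string text)
        {
            return client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
        }
    }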

Azure Websites & Mobile Services have tiers ranging from free to enterprise level. With the recent announcement that you can create 20 MB SQL Azure databases for free, it is now very cheap to start creating your own projects on the weekend, while still having the reassurance that you can scale up to an enterprise level when needed.

Within the Azure portal you can now configure deployments to automatically scale themselves up and down based on load. You can set an ideal CPU utilisation range, beyond which Azure will automatically provision or de-provision instances. There is also the ability to scale based on the length of a Windows Azure Storage queue, for worker roles that process messages. The auto-scaling announcement, combined with the recent update to “pay per minute” pricing, means your applications can rapidly respond to load while keeping your costs as low as possible.

On top of all of this, there has been an updated release cadence, with more frequent updates published to Azure than ever before. There have been over 100 major releases to the Azure services since Build 2012. This has seen the capabilities of Azure rapidly expanding, and it will be exciting to see this pace continue throughout 2013.

By David Burela

Build 2013 – Day 1 keynote summary

For those that weren’t able to watch the Build keynote live, I have provided a quick overview of the highlights as they happened.

Highlights

Summary of the Keynote

  • Today is about Windows 8 only. Tomorrow is Azure only
  • Showed the Windows 8.1 improvements that were in the blog post (multi-monitor support, resizable Metro apps, Start Button)
  • Windows 8.1 & IE11 support WebGL
  • Lots of talk about trying to embed Bing everywhere.
  • Support for your own protocols over USB, Wi-Fi Direct, etc.
  • Support for 3D printers, making them as easy to use as 2D printers
  • Cool tech demos showing some interesting scenarios with new devices
  • However, not really any new information

(detailed breakdown and images continued in blog post)


Community report: Australian Microsoft BizSpark Azure camps

Last week, I flew around Australia helping out with the BizSpark dev camps. I answered questions the attendees had about developing on Windows Azure, as well as Windows 8 and Windows Phone.
I also got up front and did a mini presentation on development tips.

At the event we discussed ways that Windows Azure can help you quickly bootstrap your startup by providing a cheap way to deploy your web app to test the idea, and then scale it up once it starts to get popular. We also touched on how to build Windows 8 and Windows Phone apps that can be supported with Windows Azure Mobile Services.

Below are some photos from the events.
The Sydney event:


And the Melbourne event:


I really enjoy participating in the developer camps & app fests. They are a great way to get a bunch of passionate people into a room and swap stories. It is always interesting to see what personal projects people have been working on and need help with.

By David Burela

Xbox One cloud + Geo-distributed Windows Azure datacentres

Previously I blogged about how Microsoft were creating new Windows Azure data centres in Australia http://davidburela.wordpress.com/2013/05/21/windows-azure-datacentres-coming-to-australia/.

Another recent announcement was Microsoft stating that their upcoming Xbox One gaming console will be supported “by the cloud”. From an interview:
“We touched on it briefly in our press conference, and I think it’s a really important point and a key differentiator for Xbox One. Essentially, no longer are we constrained to the processing power of Xbox One in terms of hardware; we’re able to do a lot of processing that was traditionally done on the box, in the cloud.
“It’s also been stated that the Xbox One is ten times more powerful than the Xbox 360, so we’re effectively 40 times greater than the Xbox 360 in terms of processing capabilities [using the cloud]. If you look to the cloud as something that is no doubt going to evolve and grow over time, it really spells out that there’s no limit to where the processing power of Xbox One can go.”
Source http://stevivor.com/2013/05/microsoft-xbox-australia-on-some-of-todays-lingering-xbox-one-questions/

Offloading processing power

For those who haven’t kept up to date on the announcements, cloud support for the Xbox One has the potential to open up a lot more back-end processing to support games than has previously been possible. Traditionally, console games have been wholly self-contained on the console.

Cloud support enables scenarios where you need processing done, but it isn’t latency sensitive. When you turn the console off, the game world stops until you turn it back on again. Having a cloud backend that can easily support “persistence” in your console game can make it richer. For example, while your console is off, the people in your castle can continue to go about their day-to-day activities, so that when you come back you can see what progress has been made in your world.

Other examples of latency-tolerant calculations have been lighting effects in large areas and large physics simulations. Traditionally, physics effects have been restricted to what can be processed in real time, but larger, more complex simulations are now possible. In a first-person shooter, a player could fire numerous rockets at a large building, each impact weakening the structure. Once the building has taken enough damage, it can trigger the cloud to process a complex crumbling animation that takes each of the impact points into consideration. Waiting a few seconds for the results to come back is fine, as the building was slowly weakening to the point of collapse.

Game hosting

With quick-match games like the original Halo, traditionally it would come down to one of the consoles acting as the game host, with the other players connecting to that one console. This ties up compute power on that console, as the hosting player needs to process game server information as well as render the scene for the local player. Cloud hosting enables developers to offload that processing from the console, freeing up even more processor cycles for making the game look good.

An interview with the developers of the upcoming game Titanfall goes into great detail about the benefits the Xbox One cloud brings to them: http://www.respawn.com/news/lets-talk-about-the-xbox-live-cloud/

One great quote is about how they can scale their servers based on demand, removing the usual “launch day” woes of having to over-provision to handle the initial wave of players.

How is this different from other dedicated servers?
With the Xbox Live Cloud, we don’t have to worry about estimating how many servers we’ll need on launch day. We don’t have to find ISPs all over the globe and rent servers from each one. We don’t have to maintain the servers or copy new builds to every server. That lets us focus on things that make our game more fun. And best yet, Microsoft has data centres all over the world, so everyone playing our game should have a consistent, low latency connection to their local data centre.
Most importantly to us, Microsoft priced it so that it’s far more affordable than other hosting options – their goal here is to get more awesome games, not to nickel-and-dime developers. So because of this, dedicated servers are much more of a realistic option for developers who don’t want to make compromises on their player experience, and it opens up a lot more things that we can do in an online game.
Source http://www.respawn.com/news/lets-talk-about-the-xbox-live-cloud/

Bringing the two together

The reason I tied these two announcements together (Xbox One cloud & Microsoft opening data centres in Australia) is that they will help Australian consumers a lot. While some tasks offloaded to the cloud can be handled in a high-latency situation (economy simulations), there are others where lower latency is always better (hosting multiplayer games).

In low-latency situations Australia always gets the short end of the stick due to our physical distance from both the USA and Europe. The announcement of Microsoft opening Windows Azure datacentres here, which could potentially host Xbox One servers, is encouraging.

If my crystal ball is correct on this one, then local cloud compute happening in Australia will be a great boon to all Australian gamers.

By David Burela

Windows Azure introduces pay-by-the-minute feature

A concern when architecting for the cloud is dealing with elasticity. The pricing of Windows Azure compute / VM roles had an effect on how you dealt with scaling for demand. Previously, if you spun up an instance for 20 minutes to temporarily deal with additional load, you would be charged for a full hour. This led to a one-hour granularity when evaluating when you should scale. Your best bet was to evaluate on the hour whether you needed to spin up more instances, and then 10 minutes before the hour was up, evaluate whether it was a good time to take them down again.
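
As an illustrative sketch (my own pseudo-heuristic, not any official Azure API), this is the kind of clock-watching logic hourly billing pushed people towards when deciding whether to take an instance down; per-minute billing makes it unnecessary:

    using System;

    class HourlyBillingHeuristic
    {
        // Under hourly billing you paid for the full hour regardless, so the
        // "cheapest" moment to shut an instance down was shortly before its
        // next billed hour ticked over.
        static bool IsGoodTimeToShutDown(DateTime startedUtc, DateTime nowUtc)
        {
            var minutesIntoCurrentHour = (nowUtc - startedUtc).TotalMinutes % 60;
            return minutesIntoCurrentHour >= 50;
        }

        static void Main()
        {
            var started = new DateTime(2013, 6, 3, 9, 0, 0, DateTimeKind.Utc);
            Console.WriteLine(IsGoodTimeToShutDown(started, started.AddMinutes(20))); // False
            Console.WriteLine(IsGoodTimeToShutDown(started, started.AddMinutes(55))); // True
        }
    }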

Microsoft announced updated pricing, where they will now be charging per minute.

http://blogs.msdn.com/b/windowsazure/archive/2013/06/03/announcing-new-offers-and-services-on-windows-azure.aspx 
http://blogs.technet.com/b/firehose/archive/2013/06/03/windows-azure-introduces-pay-by-the-minute-feature.aspx

This is great news, as it allows finer granularity in how services are scaled. I’m sure a lot of people will be excited about this.

By David Burela
