Visual Studio 2017 Installer New Features

At a high level, the installation has been broken down into smaller chunks called workloads. You select the workloads you want to use and only the components required for those workloads are installed. If you want, you can still fall back to the old method of installing individual components. The Visual Studio setup is available as an online installer only; you can’t download the ISO from MSDN or the Microsoft website any more. You can create your own offline installer, but more about that later in the post.

The Package Cache

For the installer to work, and to perform maintenance tasks like updating and repairing your Visual Studio (VS) installation, the VS installer keeps downloaded packages in a package cache. By default the package cache is in:

"%ProgramData%\Microsoft\VisualStudio\Packages"

At the time of writing the ability to disable or move the package cache was in preview, instructions here. Using the VS installer you can disable the package cache, but that has some nasty side effects: every package the VS installer needs to update or repair your installation will be downloaded from the internet and deleted again once the operation is done. You can also specify the location of the package cache in the registry. If you create the registry key before you start the installation, the VS installer will place the packages in the specified location. For existing installations you have to move the packages to the new location manually.
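As a sketch, and assuming the key from the preview instructions is still current (HKLM\SOFTWARE\Microsoft\VisualStudio\Setup with a CachePath value), you could set the location from an elevated PowerShell prompt like this:

```powershell
# Assumption: the VS installer reads CachePath under this key;
# verify against the current preview instructions before relying on it.
$key = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\Setup'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'CachePath' -Value 'D:\VSPackageCache'
```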

Creating An Offline Installer

You can create an offline installer using the --layout switch of the VS setup exe:

vs_enterprise.exe --layout c:\vs2017offline [--lang en-US]

The full VS 2017 download is quite large, but you can save some space by specifying which languages you want to use; by default it downloads all of them. Keep in mind the VS installer downloads files to your AppData temp folder before moving them to the layout folder. I ran into a problem with one installation where the installer was caching the whole installation in the AppData temp folder on my C: drive even though I was creating the offline installer on a different drive. I was unable to determine the cause, but keep it in mind if you don’t have much free space on your C: drive. To install VS using the offline installer you first have to install the certificates used to verify the downloaded packages, instructions here.

Updating The Offline Installer

By default VS will look for updates online even if it was installed from an offline installer, but you can force VS to look for updates in the offline installer folder by changing its ChannelURI. You have to edit the response.json file in your offline installer directory and change the ChannelURI value to point to the channelManifest.json file in your offline installer folder, full instructions here.
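As an illustrative sketch only (the property name and path below are assumptions based on the layout created earlier; check the response.json generated in your own layout folder):

```json
{
  "channelUri": "C:\\vs2017offline\\channelManifest.json"
}
```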

To update the offline installation folder you can run the VS installer with the --layout parameter, pointing it to the existing offline folder. The VS installer will download any new packages to the folder and update the channelManifest.json file. VS installations whose ChannelURI points to the offline installer will then pick up the updates from the folder instead of downloading them. At the time of writing the VS installer didn’t purge old packages from the layout folder, it only added new ones, so the folder can grow significantly. I guess that makes sense, since older VS installations would still need the older packages to repair installations or add features.

Setup Tools

There are also some new tools to detect Visual Studio installations instead of rummaging through the registry.

  • The VSSetup PowerShell module can be installed from the PowerShell Gallery using:
Install-Module VSSetup
  • The vswhere executable can be downloaded from the GitHub page here.
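A quick sketch of using both tools to find VS 2017 installations; Get-VSSetupInstance comes with the VSSetup module and the vswhere switches shown are its documented basics:

```powershell
# List installed VS 2017 instances with the VSSetup PowerShell module.
Install-Module VSSetup -Scope CurrentUser
Get-VSSetupInstance -All | Select-Object InstallationPath, InstallationVersion

# Or ask vswhere.exe for the installation path of the latest instance.
.\vswhere.exe -latest -property installationPath
```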

Francois Delport

Application Insights Customisation And OMS Integration

In this post I’ll be taking a look at Application Insights customisation and OMS Integration.

Custom Dashboards

You can pin the out-of-the-box charts to your Azure Portal dashboard by clicking on the pin in the top right corner. Analytics query charts can also be pinned using the same method, but they are pinned to shared dashboards. Shared dashboards exist as an Azure resource in a resource group, and you can use RBAC to control access to them.

Alerts

You can create alerts based on the telemetry in Application Insights; you will find the Alert Rule button in the Metrics Explorer blade.

Currently you can only create alert rules based on out-of-the-box telemetry, not from custom events or analytics queries. The good news is that the feature is in preview, so it should be available soon, link to uservoice.

Correlating Entries And Custom Properties

By default Application Insights populates the operation_id for every web request and propagates that operation_id to dependencies that it is able to trace out-of-the-box, for example SQL Server queries and WCF calls to HTTP endpoints. The example below is for SQL Server queries joined to web requests.

If you have a dependency that Application Insights can’t trace automatically, or you have multiple levels of dependencies, you have to provide your own solution to propagate the operation_id, or your own contextual identifier like a customer id. You can do this by creating a TelemetryInitializer to add your custom id, or to grab the web request id and pass it along to the other services, example here.

OMS Integration

You can view and query your Application Insights telemetry in OMS by installing the OMS Application Insights solution from the OMS portal and configuring which applications you want to connect from Application Insights.

You can connect multiple applications from different subscriptions, which makes it easy to create a consolidated view. You can correlate Application Insights telemetry with other data sources in OMS, like infrastructure telemetry, making it easier to pinpoint the cause of slow application response.

VSTS Integration

You can install Application Insights widgets on your VSTS dashboards to surface Application Insights data in VSTS, link to marketplace.

PowerBI Integration

There is a content pack for creating PowerBI dashboards with telemetry from Application Insights, link here. The content pack comes with a set of built-in dashboards; if you need a custom dashboard you can export Analytics queries to PowerBI as explained here.

Custom Telemetry

Contextual logging and telemetry that makes sense in a business context, for instance orders completed per second or abandoned shopping carts, can be a very powerful tool for getting useful information out of logs and for tracking problems related to a specific customer interaction. To achieve this you can add your own telemetry to Application Insights by writing custom events and logging exceptions, link here. You can also have your favorite logging library write to Application Insights, examples here.

Francois Delport

Adding Application Insights To Existing Web And Service Applications

In this post I’m going to take a quick look at adding Application Insights to existing web and service applications. I’m going to highlight a few things I discovered and explore the differences between the two approaches.

Adding Application Insights During Development Or Post Deployment

There are two ways to add Application Insights to your application:

  • During development you can use the Application Insights SDK and wizard in Visual Studio to add it to your web application.
  • After deployment you can enable Application Insights using the Status Monitor application or PowerShell.

The web application I’m working on is tied to Visual Studio 2012, and Visual Studio 2013 or higher is required for the wizard in Visual Studio, hence the investigation into adding Application Insights to deployed applications. Enabling Application Insights post-deployment can also be handy for web applications where you don’t have access to the source or you are not allowed to make any changes. I followed the instructions here.

Some Differences Between The Two Methods

  • When you add Application Insights to your project using the wizard in Visual Studio it will install additional NuGet packages, references and configuration settings to send telemetry data to Azure. For deployed applications you have to install the Status Monitor application, or use PowerShell, to configure Application Insights. Application Insights keeps its configuration in an ApplicationInsights.config file in your web app root folder, some NuGet packages in your App_Data folder and additional dlls in the bin folder.

The configuration file, dlls and packages can be lost when you redeploy.

Side Note: You can use Application Insights locally to view telemetry data in Visual Studio without configuring Application Insights in Azure or sending any telemetry data to Azure.

  • Application Insights can monitor client-side events if you add some JavaScript snippets to your pages; it can provide more detailed telemetry and you can create custom telemetry entries using the SDK. For deployed applications you only get the telemetry that IIS and .NET provide by default.

How To Add Application Insights Post Deployment Using PowerShell

Obviously losing your Application Insights configuration after deploying is not an ideal situation; luckily you can easily script the configuration using PowerShell.

Import-Module 'C:\Program Files\Microsoft Application Insights\Status Monitor\PowerShell\Microsoft.Diagnostics.Agent.StatusMonitor.PowerShell.dll'

#optional
Update-ApplicationInsightsVersion

Start-ApplicationInsightsMonitoring -Name appName -InstrumentationKey {YourInstrumentationKeyFoundInTheAzurePortal}

#optional
Get-ApplicationInsightsMonitoringStatus

The Update command is optional; it updates the version of Application Insights installed before adding it to the application. The Get command is also optional, making it easy to see the status information in your VSTS deployment log.

Adding Application Insights To Service Or Desktop Applications

For non-web applications it can’t be done after deployment; you have to add the Microsoft.ApplicationInsights.WindowsServer NuGet package during development and configure the InstrumentationKey in your code, instructions here.
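As a minimal sketch, the instrumentation key can also be supplied in an ApplicationInsights.config file deployed next to your executable (the key value below is a placeholder):

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```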

Side Note: Application Insights is billed by usage and the first GB of telemetry data is free, making it easy to experiment or even use on small applications permanently. You can use sampling to limit the telemetry data and configure a daily limit in the portal to make sure you stay in the free tier.

Francois Delport

Pester Testing And PowerShell Development Workflow

In this post I’m going to have a look at Pester testing and PowerShell development workflow. I’ll be using Visual Studio and Team Foundation Server/Visual Studio Team Services to keep it simple. What I’m going to show won’t be news for developers but this can help to put testing in perspective for non developers.

Developer Workflow
First, let’s have a look at the typical points during development when you want to run your tests, and then we will dive into the details of getting it to work.

  • Running tests locally while developing.
    This depends on the development methodology you follow.

    • If you practice TDD you will be writing unit tests as you develop, a few lines at a time.
    • Depending on the size of the script, most other developers will write a function and then a unit test for it, or the whole script and then multiple unit tests. Remember it is better to write the tests while the code is still fresh in your mind. These scenarios work seamlessly if you use Visual Studio with PowerShell Tools; Visual Studio will discover your Pester tests and show them in the Test Explorer.

Side Note: You can also use the tests to make development easier by mocking out infrastructure that is tricky to test like deleting user accounts in AD or deleting VMs.

  • Running tests with a gated check-in. In this case you want VSTS/TFS to run the build and test suite before the code is committed to the main code repository. VSTS/TFS supports check-in policies, and one of them requires the build to pass before the code is committed.
  • Running the tests after check-in as part of the build process. This is how most developers will run it, and although VSTS/TFS doesn’t have native support for Pester tests you can run the tests using a PowerShell script. Usually the whole test suite is executed at this point, and it can include integration tests.

For the last two scenarios to work you have to be able to install the Pester and PSScriptAnalyzer modules, as well as any other modules required by your script, on the build server. If you are using VSTS hosted build agents this is a bit of a problem, since you need administrator access to install modules in the default locations for PowerShell. You can, however, manually import a module from a local folder during the build process. You can include the modules in your project, or store them in a central location or repo and share them between your scripts. Some modules can even be installed from the Marketplace as a build step. If you are hosting your own Visual Studio agents or using TFS this is not a problem; you can install them as part of your build server or on demand.

Running A Pester Test
To execute Pester tests you have to run a PowerShell script that calls Invoke-Pester. By default Pester will recursively look for all scripts called *.Tests.ps1, but you can specify the script to test and even the specific functions in a script. Read the full documentation here.
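As a minimal sketch of such a test, including a mock for a destructive infrastructure call (all function names here are made up for the example):

```powershell
# Remove-UserAccount.Tests.ps1 - normally the functions under test would be
# dot-sourced from their own script file.
function Remove-UserAccount {
    param($UserName)
    Remove-DirectoryEntry $UserName   # destructive call we don't want in tests
}
function Remove-DirectoryEntry { param($UserName) throw 'must not run in tests' }

Describe "Remove-UserAccount" {
    Mock Remove-DirectoryEntry { }    # replace the infrastructure call with a no-op

    It "deletes the directory entry for the user" {
        Remove-UserAccount -UserName 'jsmith'
        Assert-MockCalled Remove-DirectoryEntry -Times 1 `
            -ParameterFilter { $UserName -eq 'jsmith' }
    }
}
```

Running Invoke-Pester in the folder containing this file picks it up automatically.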

Retrieving Test Results
Pester can output the test results in NUnit XML format; consult the documentation for the syntax and output options. VSTS/TFS can display this format on the test results tab. It doesn’t seem to do the same for the code coverage results, but you can still attach the output of the code coverage to your build artifacts by using VSTS/TFS commands.

Alternatively you can roll your own test results by passing the -PassThru parameter to Invoke-Pester; this returns the results as a typed object, which you can transform into any format you want.
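Both options in one sketch, using the Pester 3/4 parameter names:

```powershell
# Write an NUnit XML file that VSTS/TFS can show on the test results tab.
Invoke-Pester -Script .\Tests -OutputFormat NUnitXml -OutputFile TestResults.xml

# Or capture the results as an object and transform them yourself.
$results = Invoke-Pester -Script .\Tests -PassThru
Write-Host "Passed: $($results.PassedCount) Failed: $($results.FailedCount)"
```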

Francois Delport

Visual Studio Team Services Environments And Machines

In my previous post I showed a very simple example using Visual Studio Team Services Release Management albeit to the default environment. In this post I’m going to dive deeper into the concept of Visual Studio Team Services environments and machines.

Environments
When you think of the term environment in the context of deploying software, the first thought is usually one or more servers where you deploy your application to. But in VSTS environments are something different, and this is a bit confusing in the beginning. In VSTS an environment is a logical container for a set of variables, approvers, tasks, agent queues and so on. I think the most important point is that your set of deployment tasks is defined per environment. You have to put some thought into the design of your deployment process to balance the trade-off between the re-use of assets and the ease of making changes per environment.

Contrast the VSTS approach with Octopus Deploy, where you have one deployment process: you could craft some control logic using variables, environments and lifecycles, but it could become very messy and difficult to follow. Changes to your deployment process would impact all the environments using that project, making it more cumbersome when you have changes to, for instance, your dev environment only. This is often the case, since you introduce functionality incrementally per environment from dev to production.

In VSTS you can change the deployment tasks for one environment without affecting other environments, but if you have the same change in all environments at once you have to repeat the change for each environment. From experience this should not happen very often, since you make changes incrementally from dev to production. It will be interesting to see how this plays out in the long run.

Also keep in mind how you structure your deployment scripts: ideally you want the same script for all your environments, with different parameters passed in per environment. If you have different logic for each environment it becomes a maintenance burden and also leaves you open to human error, since you will be crafting a script for production that was never tested in dev or QA, missing the whole point of continuous deployment. You can’t completely escape errors, but making changes to production by adding new variables is less error prone than running a new script.

Where do I specify my servers
This leads to the obvious question: where do I specify which servers to deploy to, if not in my environment? Some deployment tasks have a section for machines where you can provide a comma-separated list of machine names or IP addresses, a variable that contains the names, or a machine group name.

A machine group is a collection of machines; you could make one called Dev, for instance, and add all your development machines to it. To create machine groups open the Test menu in VSTS and then the Machines sub-menu. The machines must have WinRM configured and be accessible.

You can also use output variables from steps that create new VMs, like the Azure Resource Group Deployment task, and pass those to subsequent steps. Some tasks, like Command Line or Batch Script, don’t have the ability to specify a machine name and will execute on the server where the agent is hosted. If you really have to use these steps instead of remote PowerShell, you can control the agents that are selected using agent queues, which are configured at the environment level, and agent capabilities, which are configured at the agent level.

Links:
VSTS Deployment Guide

Francois Delport

Visual Studio Team Services Private Build Agents

In this post I’m going to show you how to use Visual Studio Team Services private build agents. Visual Studio Team Services (VSTS) supports two types of build agents, hosted and private. Hosted agents run on Microsoft servers in the cloud and private agents are hosted by you. When you create your VSTS account you will see two agent pools, Hosted containing your default hosted agent and Default which will be empty. I’ll be adding my private build agent to the Default pool for this demo.

Hosted agents make it easy to get started and work great for building most applications, but there are limitations since you have no control over the build server environment. Hosted agents can also be used to deploy to Azure. If the VSTS build environment doesn’t meet the requirements for your application build you can host your own build agents in your own environment. These locally hosted agents can also be used to deploy your application locally.

To get started, click on the Download agent link in your agent pools control panel, unzip the file and run ConfigureAgent.cmd. Fill in the prompts; they are pretty self-explanatory, but keep in mind that depending on your application your build can get very big and I/O intensive, so it could be worth it to put your working directory on a separate drive. If you get stuck, the documentation is over here.

If the installation was successful you will see the newly installed agent in your pool.

Note: Initially I installed the agent to run as a service using the default local service account and it worked for most of my applications, but I had one build that required running as a specific user account. As per the documentation I used the “C:\Agent\Agent\VsoAgent.exe /ChangeWindowsServiceAccount” command from an elevated command prompt to change the service account, but that didn’t work: the service didn’t update with the new credential and the agent showed as offline in VSTS.

To fix the problem I had to run “C:\Agent\ConfigureAgent.cmd” again, specifying the new account name, and then it worked.

The next step is to configure certain builds to use my on-premises agent by default, since those builds won’t work using the hosted agent. The simplest but least dynamic way is to set the build to only use agents from a specific pool. In this example I set the build to use the Default pool.

You can, for instance, limit builds that take very long to a certain pool so they won’t prevent other applications from building. Depending on the complexity of your environment and projects it can be better to use demands and capabilities as well. Each agent has a list of capabilities and you can add custom ones; in this case I called the capability OnPrem.

In my build definition I can now specify that the agent used for building must meet this demand.

Now it will choose an agent from the pool that satisfies the demand. If you create a rule that cannot be satisfied, you’ll get this error message to warn you; otherwise your build would just be stuck.

Free VSTS accounts include one free private agent and charge $15 per agent thereafter. Even if the hosted agent is able to build your application, look into private agents: depending on the machine hosting the agent, your build can be a lot faster.

Francois Delport

Using SSH Between FishEye And Your BitBucket Server

In this post I’m going to cover using SSH between FishEye and your BitBucket server. When you configure BitBucket Server you have the option to enable SSH and HTTPS connections. Although you can use BitBucket without SSH there are scenarios where it is better to use SSH, one of them is connecting BitBucket and FishEye.

If you use HTTPS-only connections with FishEye you will experience the following problems.

  • You won’t see the repositories it discovered automatically in the BitBucket Repositories tab. When you add an Application Link to BitBucket and enable SSH it will automatically scan the repositories and show them here. Technically you can live without this functionality and manually add the repositories using Native Repository access, but that is more involved.
  • If you add a repository link using HTTPS, the user name and password are stored in plain text in the config.xml file of your FishEye instance. If you use SSH, only the name of the key is stored.

Security
I was pleasantly surprised to find out an SSH server is already bundled with BitBucket, and if you have an existing SSH service running, this one should not interfere with it. I was also wary of opening up even more ports on our servers for security reasons, but it looks like the bundled SSH server is locked down pretty well: you can’t use it to execute arbitrary SSH commands and it is not open to existing users on the system. You can read more here in the official documentation.

Keys
Generating the keys is done automatically if you have the application link between FishEye and BitBucket configured. When you see the repository in the BitBucket Server repositories list, click on the Add button. The repository will now show Added next to its name and it will also appear in the Native repository access list. You can confirm this by clicking on the repository name in FishEye, and in BitBucket by clicking on the cog icon in the repository.

NOTE: Make sure you choose the correct option when you install Git on your FishEye server and confirm that you can run ssh.exe and git.exe from the command prompt. If it doesn’t work, check your PATH variable and try restarting the FishEye service to pick up the changed PATH. You can specify the path to git.exe in FishEye but not to ssh.exe; it must be reachable from the PATH.
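A quick way to check is from a command prompt on the FishEye server:

```powershell
# Each command should print the full path of the executable; an error
# means the PATH seen by the service is missing the relevant folder.
where.exe git.exe
where.exe ssh.exe
```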

If this isn’t configured properly you will receive errors in FishEye saying it can’t find the ssh executable.

Francois Delport

Managing Git In Visual Studio 2015

In this last post about Git I’m going to touch on a few odds and ends I came across that are worth mentioning or confused me at first.

Working Directories
Git does not create folders in your local repository for every branch you check out, like TFVC or SVN does. There’s only one working directory per repository, and in that directory is a hidden .git folder where the files are stored.

When you change your active branch by selecting a different branch in Team Explorer, Git retrieves the files for that branch into your working folder. If you have changes that were not committed to the local branch, Visual Studio will warn you so you don’t lose your work when you switch branches.

Multiple Repositories
When you clone a repository you get everything in the history of that repository, and I mean everything: every version of every file change since the repository was initialised. This can be handy, since any client repository can be used to recover the server repository should anything happen, but it also means the repository can get really big over time.

To keep it manageable you should create multiple repositories in your team project and keep solutions that change together, or have a logical reason to be together, in one repository.

Keep in mind that if you have projects that change together spread across multiple repositories it can add to your headaches; merging, branching and tracking versions across multiple repositories can be difficult. There are also some practical implications, for example searching across multiple repositories is not supported by all Git tools.

To create a new repository, open the TFS portal, browse to your team project and click on the Code tab, then click on the drop-down arrow next to your current repository name, select Manage Repositories… and click on New Repository…. Give the new repository a meaningful name and click on OK.

Back in Visual Studio, open Team Explorer and click on Manage Connections to see the new repository; right-click on it and connect to it.

Visual Studio will prompt you to clone the repository to start using it.

Merge conflicts in pull requests
In my previous post I discussed pull requests; now I’ll briefly show you how to handle merge conflicts. When you submit a pull request to merge your code back into the master branch and there are other changes that cannot be automatically merged by Git, you will see this error message.

TFS actually makes it very easy to fix: when you click on the Manual merge help… link under the merge failed message it will show you exactly what to do.

After you do a pull to get the latest changes into your local branches, you have to merge master into BranchOfMaster. First make sure your branch is the active branch in Git, then right-click on it and select Merge From…; we want to merge from master into this branch.

When Team Explorer shows the Resolve Conflicts section, click on the Merge button and you will see the merge editor.

In this case I am going to select both changes and click on the Accept Merge button. Still under the Resolve Conflicts section of Team Explorer, click on Commit Merge. If you now look at the pull request screen in the TFS portal you will see there are no more merge conflicts, and you can complete the pull request.
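The same resolution from the command line would look roughly like this (branch name taken from the example above):

```powershell
git checkout master
git pull                      # update local master from the server
git checkout BranchOfMaster
git merge master              # conflicts are marked in the affected files
# ...fix the conflicted files, then:
git add .
git commit                    # commit the merge
git push                      # update the pull request on the server
```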

Remember to pull down the changes to your local master, since the pull request only updated the remote/origin master. I assumed that when I right-click on my remote/origin branches and view the history it would be real time, but it seems you have to do a pull to view the latest history as well.

Francois Delport

Git Branching In Visual Studio 2015

There are many different ways to use Git, or Git workflows; you can even use it like a central repository, similar to the way you would use SVN or Team Foundation Version Control, but most of the popular and widely used workflows involve branching.

In this post I am going to give a quick demo on how to create a new branch from a remote master in Visual Studio 2015 and submit a pull request to merge it back to the remote master branch. This post is for absolute beginners to Git and follows on the previous post that showed how to create your repository.

In your solution, open Team Explorer and go to the Branches section. Right-click on the master branch under remotes/origin and click on Create new local branch. Make sure that Track remote branch is NOT selected, give the branch a meaningful name and click Create branch.

If you look at the Branches section you will see the new FixABug branch created in your local repository but it is not on the server yet. To push the branch to the server, right click on the FixABug branch and click on Publish Branch. If you now look at the remote/origin repository you will see the new branch on the server.

Make some changes to your solution and commit them by opening the Changes section in Team Explorer; give the commit a proper commit message and click on Commit. These changes are now committed to your local repository; sync them to your bug fix branch on the server by doing a sync.

I did the commit and sync in two separate steps to drive home the point about local repositories, but you can use Commit and Sync to do both at once. Next we are going to merge the changes from the FixABug branch, which is now on the server, into the master branch on the server.
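For reference, the equivalent flow in plain git commands (branch name from the example):

```powershell
git checkout -b FixABug master   # create the local branch from master
git push -u origin FixABug       # publish the branch to the server
# ...make your changes, then:
git add .
git commit -m "Fix a bug"        # commit to the local repository
git push                         # send the commit to origin/FixABug
```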

Now we are going to create the pull request. In the Branches section, make sure the remote FixABug branch is selected, right-click on it and select Create Pull Request, and watch the seamless integration between Git and Visual Studio fall apart as you are taken to the TFS web portal to create the pull request 🙂

In this case I am the same person making the pull request and approving it, but usually the person approving it will be a senior member of the team who reviews the change before allowing it. You can add reviewers to this pull request using the reviewers drop-down on the right. I’m going to add myself to the reviewers list.

If I now look under pull requests assigned to me I will see this one waiting. As the only reviewer I am going to approve this pull request.

Now I will complete the pull request and merge the changes into master by clicking on the Complete pull request button. You will have the option to delete the FixABug branch, which can be handy if the bug is fixed and the branch will not be needed anymore. If you now look at the master branch on the server you will see your changes.

This was obviously the happy path; in reality there can be merge conflicts, reviewers can ask for changes and pull requests can be abandoned, but I will look at those scenarios in a later post.

Francois Delport

Introduction To Git In Team Foundation Server 2015 And Visual Studio 2015

This is a short intro to using Git in Visual Studio 2015 and Team Foundation Server 2015, aimed at someone completely new to Git, like me :)

The most important concepts to understand about Git are that it is distributed and that it works on branches. You download the code from the remote/origin repository to a local repository in your dev environment. Changes you make to the code are first committed to this local repository. When you are ready to send your changes up to the remote/origin repository, you first pull down changes made by other users on the remote/origin repository and merge them into your local branch, fixing any merge conflicts, and then you commit the changes to the remote/origin repository.

Git support is now baked into Visual Studio; when you install it, remember to select Custom installation and select Git tools as part of your installation.

Once installed, open Team Explorer; in my case I didn’t have any other TFS connections set up. Click on Manage Connections -> Connect to Team Project and add a new TFS server connection.

Once created, select the server from the list of connections to connect to it. You can ignore the message to configure a workspace; Git does not use one. Click on the drop-down arrow and select Projects and My Teams -> New Team Project… to add a new team project.

Give your project a meaningful name, in this case ‘Trying Out GIT’, and select the process template of your choice.

Remember to choose Git as your version control system. After a few seconds you will get a success confirmation and you will see the new team project in your list of projects in Team Explorer; double-click on it to connect to the project.

Even though the remote repository is empty you still have to clone it to your local repository to start working. You will see a message to clone the repository; click on Clone and choose a folder for your local repository.

Git will download a copy of the empty remote repository to your dev environment. When you make changes and commit them, they will first go into this local repository. You will see later how the committed changes are sent to the remote repository on the server.

Since this is an empty repository I’m going to add some code to it: click on the New… button under Solutions to add a new solution to this repository. For this demo I am adding a new ASP.NET MVC application, but the type of project is irrelevant. In Team Explorer click on Changes and you will see the changes from the solution you created; note the branch is master since you just created the repository from scratch.

Click on Commit and you will see a successful commit message; each commit gets a SHA-1 hash to uniquely identify it and its changes. This commit was to your local repository; you still have to push it to the server. If you open the TFS web portal and go to the code window you will see no code in the repository yet. Click on Sync and you will see the synchronization screen.

There is quite a bit to explain here:
Sync: Pull down changes from the server and merge them into your local repository, then send your local changes to the server.
Fetch: Pull down changes from the server but do not merge them into your local repository; you have to merge them yourself.
Pull: Pull down changes from the server and merge them into your local repository; it does not send local changes to the server.
Push: Send your local commits to the server; the push is rejected if the server has changes you haven’t merged yet.
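For reference, these operations map roughly onto the underlying git commands:

```powershell
git fetch    # download remote changes without merging them
git pull     # fetch and merge remote changes into the current branch
git push     # send local commits to the server
# a Visual Studio Sync is effectively a pull followed by a push
```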

You can see the local outgoing commit waiting to be pushed to the server; press Push to send the changes to the server. In this case we can safely push since this is the initial commit into master. Normally your changes would be made in a branch and you would submit a pull request to pull your changes into the master branch. The pull request is reviewed by the person in charge of the master branch and then accepted or declined. I’ll show pull requests and code reviews in my next post.

When the push is done, select Branches from the team menu to see your master branch on the server.

Another term you will come across is origin; it refers to the remote server where the master branch lives. In Visual Studio it shows as remotes/origin; in other Git tools it will usually just show as origin.

Francois Delport