Channel: Alexey Totin – .NET Tools Blog

ReSharper Interactive Tutorials get an update


As you may remember, along with ReSharper 2016.3 we released a special plugin entitled “ReSharper Tutorials.” In brief, it’s a plugin with a set of interactive ReSharper tutorials. Here’s how it works:

  • When you launch a tutorial, a sample solution is loaded.
  • The tutorial guides you through a number of steps, each illustrating a particular feature. The plugin automatically checks whether you have performed the actions described at each step.
    ReSharper Tutorials. Automatic Steps

With the release of ReSharper 2017.1, the plugin has also been updated, to version 0.9.9. What’s in this update?

  • A new “What’s New in ReSharper 2017.1” tutorial. It will guide you through the main ReSharper features for C# brought by v2017.1.
  • User interface improvements that include a new UI theme and the ability to go to the next step by pressing Tab instead of clicking the Next Step button (for steps that don’t automatically verify your actions).

A short reminder on how to install and run the plugin:

  1. Go to ReSharper | Extension Manager, search for “Tutorials” and install the plugin. The latest stable version is 0.9.9. Don’t forget that ReSharper plugins are version-dependent, meaning you can install and run ReSharper Tutorials 0.9.9 ONLY on ReSharper 2017.1.
    ReSharper Tutorials in Extension Manager
  2. Run the plugin via ReSharper | Tutorials….
    Resharper Tutorials menu
  3. Run the desired tutorial with Run Tutorial.
    ReSharper Tutorials Home page

We hope you’ll find time to give the updated plugin a try as it’s probably the easiest way to find out what’s new in ReSharper 2017.1, as well as to learn essential ReSharper stuff. As they say, “a little knowledge with some practice is better than much knowledge with no practice.”

The post ReSharper Interactive Tutorials get an update appeared first on .NET Tools Blog.


Colored background highlighting in dotCover 2017.1


Right after dotCover 2016.3 introduced a new way to highlight code coverage (markers in the gutter instead of colored backgrounds), we immediately got a flurry of “Bring it back!” comments. Indeed, there is a range of tasks where the “old-style” highlighting could be more useful, e.g. when you need to quickly evaluate uncovered parts of code.

Some technological limitations prevented us from keeping the colored-background highlighting along with the new markers in 2016.3. Fortunately, all those limitations are now behind us, and dotCover 2017.1 gets the old highlighting back. And it’s even better than before!

The main improvement is that now background highlighting is able to show test results. To switch between highlighting types, use the option Highlight code coverage using in ReSharper | Options… | dotCover | Highlighting.

Here’s a short summary of how it works:

dotCover 2017.1 line background highlighting

dotCover 2017.1 markers highlighting

Finally, you can combine both highlighting styles by selecting the option Both markers and line background. Why does that make sense? Because this way, you can use one of the markers’ advantages – the ability to see and navigate to covering tests in one click.

dotCover 2017.1 markers and background highlighting

Note that if you want to get the “plain vanilla” highlighting from dotCover 2016.2 (where test results were not taken into account), simply switch off the option Use highlighting to show unit test results.

dotCover 2017.1 old highlighting

As always, we invite you to download the latest ReSharper Ultimate and try the “new old” highlighting in practice.

The post Colored background highlighting in dotCover 2017.1 appeared first on .NET Tools Blog.

dotMemory Command Line Tools


In the 2017.1 release, dotMemory introduced a console profiler. Now, using the dotMemory.exe tool, you can perform memory profiling from the command line. Why would you? The short answer would be to automate the process of gathering memory snapshots. There are lots of possible use cases:

  • You want to speed up profiling routines, e.g. when you regularly profile the same application and do not want to start the dotMemory user interface each time.
  • You need to profile your application on a computer you don’t have access to (e.g., your client’s) but don’t want to bother them with installing dotMemory and following detailed profiling instructions.
  • Or maybe you want to include memory profiling into your continuous integration builds (though our dotMemory Unit framework could be much handier for this purpose).

Where can I get it?

dotMemory.exe is distributed separately from dotMemory. You can download the tool on the dotMemory download page. Note that dotMemory.exe is free and does not require you to have the full dotMemory installed.

Now let’s take a look at the most common usage scenarios.

Instantly get a snapshot

The most popular scenario is probably getting a snapshot of an already running application.

dotMemory.exe get-snapshot 1234 --save-to-dir=C:\Snapshots

1234 here is the process ID. Right after you run the command, dotMemory will attach to the process, take a snapshot, save it to C:\Snapshots, and detach from the process.
You can also specify the profiled application by its process name:

dotMemory.exe get-snapshot MyApp --with-max-mem

or

dotMemory.exe get-snapshot MyApp --all

The --all and --with-max-mem options let you avoid ambiguity when multiple processes with the same name are running:

  • --with-max-mem – the process that consumes the most memory will be profiled.
  • --all – all processes with the specified name will be profiled. dotMemory will take a snapshot of each process (one snapshot per process).
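If you script these calls regularly, it can help to build the command line in one place. Here’s a minimal sketch of a helper that composes a get-snapshot command line from a target (PID or process name) and extra options; only the command syntax comes from this post, while the helper itself and the sample paths are illustrative:

```shell
# Compose a dotMemory.exe get-snapshot command line for later execution.
# Only the command syntax is from the post; the helper is an illustration.
snapshot_cmd() {
  target=$1
  shift
  cmd="dotMemory.exe get-snapshot $target"
  for opt in "$@"; do
    cmd="$cmd $opt"
  done
  printf '%s\n' "$cmd"
}

snapshot_cmd 1234 '--save-to-dir=C:\Snapshots'
snapshot_cmd MyApp --with-max-mem
```

A wrapper like this keeps the tool invocation in one function, so a path or option change touches a single line of the script.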

Get snapshots by condition

Sometimes you may need to track application memory consumption over a long time interval. In this case, you can start your application under profiling and take snapshots only when a particular condition is satisfied:

  • a periodic time interval ends (in the example, it’s 30 s):

    dotMemory.exe start --trigger-timer=30s C:\MyApp\MyApp.exe MyAppArg1

  • or memory consumption increases by a specified value (50% in the example below):

    dotMemory.exe start --trigger-mem-inc=50% --trigger-delay=5s C:\MyApp\MyApp.exe

    (--trigger-delay=5s here stands for a 5-second delay that skips the application startup phase)

Note that in both examples, we use the start command to start the application. If you want to profile an ASP.NET application, you should use the start-iis command instead. In this case, IIS and all its application pools will be started under profiling. E.g.:

dotMemory.exe start-iis --trigger-timer=30s --open-url=localhost/myapp --use-browser=Chrome

What if the app you want to profile is already running but you still want to use triggers? Simply use the attach command:

dotMemory.exe attach MyApp.exe --trigger-timer=30s

Get snapshots using stdin messages

If you want to take direct control over the profiling process (i.e., get snapshots at some exact moment), you can do this by sending messages to stdin of dotMemory.exe:

  • Get a snapshot:

    ##dotMemory["get-snapshot", {pid:1234}]

If pid is specified, dotMemory will take a snapshot of the process with the specified PID. Otherwise, dotMemory will take snapshots of all profiled processes.

  • Stop profiling and kill the profiled application:

    ##dotMemory["disconnect"]

These stdin messages work for all profiling sessions started with start, start-iis, or attach commands.

Moreover, if you want to write a profiling script, you may find it useful that dotMemory.exe is able to send service messages to stdout:

  • Start of the profiling session:

    ##dotMemory["connected", {pid: 1234}]

  • Saving the snapshot:

    ##dotMemory["workspace-saved", {path: "..."}]

Note that messages sent to stdin must always start on a new line and end with a carriage return. Both stdin and stdout messages have the format of a JSON array.
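These service messages are easy to pick out of the profiler’s output in a script. Here is a small sketch of a parsing helper; the workspace-saved message format is taken from this post, while the helper itself and the sample snapshot path are illustrative:

```shell
# Extract the saved snapshot path from a dotMemory "workspace-saved"
# service message. The message format is from the post; this parsing
# helper and the sample path are illustrative.
parse_saved_path() {
  sed -n 's/^##dotMemory\["workspace-saved", {path: "\(.*\)"}\].*$/\1/p'
}

# In a real script, the input would be piped from dotMemory.exe's stdout.
printf '%s\n' '##dotMemory["workspace-saved", {path: "C:\Snapshots\w1.dmw"}]' | parse_saved_path
```

Once the path is captured, the script can, for example, copy the snapshot to a shared folder or attach it to a CI build artifact.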

Get snapshots using API

Of course, we haven’t forgotten about our profiling API. If you control profiling directly from your code using the dotMemory API, run dotMemory.exe with the --use-api option. E.g.:

dotMemory.exe start --use-api C:\MyApp\MyApp.exe

We hope you’ll find the dotMemory console profiler useful and helpful in automating your profiling routines. As usual, we invite you to download the tool, try it on your own and share your experience.

The post dotMemory Command Line Tools appeared first on .NET Tools Blog.

Importing raw memory dumps in dotMemory 2017.2


Support for raw memory dumps was probably the most voted and long-awaited dotMemory feature. Finally, it’s available in dotMemory 2017.2!

Indeed, there are cases when it’s impossible to profile a problematic application locally or remotely and take a regular dotMemory snapshot for analysis (e.g., because of security policies). Your last resort in such a case is typically a raw Windows memory dump. It can be taken with a number of tools, with the two most popular being Task Manager (comes with the operating system) and Process Explorer. Now, all you have to do is simply copy the dump to your computer and open it in dotMemory using the Import Dump command.

Importing memory dumps in dotMemory

That’s it! The dump is converted to an ordinary dotMemory snapshot, so you can analyze it using all of the sophisticated dotMemory features like automatic inspections, retention path diagrams, etc.

Important notes

  • The feature is currently in Beta status. While it’s 100% functional, the number of possible combinations of Windows and .NET Framework versions is really huge! This means it’s possible that for some combinations, dotMemory won’t show you all of the expected data in the resulting snapshot. Please send us your process dumps should you face this issue!
  • When creating a dump of a 32-bit application with Task Manager, make sure you use a 32-bit version of the tool. You can find it in C:\Windows\SysWOW64\taskmgr.exe.
  • We have tons of ideas on how to improve the feature (dumps contain much more data than we currently analyze), so this is still a work in progress.

If you feel like trying dump import right now, download and install ReSharper Ultimate. Please ask any questions you have in the comments to this post.

The post Importing raw memory dumps in dotMemory 2017.2 appeared first on .NET Tools Blog.

Unit test coverage and continuous testing. Now in Rider!


Rider+dotCover

With each Rider release, we do our best to bridge the gap between Rider and the ReSharper Ultimate bundle. The top in-demand feature has certainly been “Rider + dotCover” integration. So, without further ado, please welcome the latest Rider 2018.2 EAP build – the first Rider version ever that features unit test code coverage and support for continuous testing!

What exactly is dotCover able to do in Rider, and how is it going to be licensed? Read the answers below.

What operating systems are supported?

Currently, only Windows is supported. Support for Mono is still a work in progress and not going to be included in 2018.2.

How is it installed?

dotCover is provided as a bundled plugin for Rider and is installed along with Rider automatically. No additional actions are needed! If for some reason you want to disable dotCover, you can do so via Rider’s Plugins settings:

dotCover in Rider. Plugin settings

How will it be licensed?

We don’t want to reinvent the wheel and confuse anyone with additional licenses. Since dotCover is normally part of ReSharper Ultimate, all coverage functionality in Rider will be available only for the ReSharper Ultimate + Rider bundle.

What features are already available?

All critical functionality is already here and available starting with the latest EAP. First, it’s “classic” unit test code coverage analysis using Coverage Tree and code highlighting. Everything looks and feels exactly the same as in Visual Studio with ReSharper Ultimate:
dotCover in Rider. Unit tests code coverage

Continuous testing is also here, with no differences compared to ReSharper Ultimate. Simply enable it for the desired session, change the code, and build or save the project (depending on your preferences).

dotCover in Rider. Continuous testing

Note that the continuous testing trigger (build or save) can be set in Unit Testing settings:

dotCover in Rider. Continuous testing settings

If you’re new to the continuous testing feature, find more details in this post.

What features are NOT yet available?

The following dotCover functionality is not present in 2018.2 (but will definitely be in the scope of future releases):

  • Code coverage of applications
  • Coverage filters
  • Coverage reports (XML, HTML, etc.)
  • Document summary
  • Hot Spots view
  • Continuous testing indicator
  • Opening coverage snapshots
  • Search in Coverage Tree

It’s still possible that some of these will make it into the 2018.2 release, but not likely.

You’re welcome to download the latest Rider 2018.2 EAP and check out its coverage capabilities for yourself. And of course, we would be glad to hear your opinion in the comments below!

The post Unit test coverage and continuous testing. Now in Rider! appeared first on .NET Tools Blog.

> dotnet dotcover test


If you’ve got the idea of this post just by reading the title, you may skip the next paragraph and go right to the procedure.

We’re going to talk about the dotCover.exe console runner and a new way to run coverage analysis of unit tests. While the dotnet tool simplified running tests a long time ago (running dotnet test in the working directory is enough), dotCover.exe still required a lot of arguments to run tests: an absolute path to dotnet.exe, a path to the .dll with the tests, and others. Too much to type, and moreover, there’s no guarantee the paths won’t change at some point. In 2018.2, we came up with a solution: the “dotnet” way to run tests under coverage analysis. All you need is to insert the dotcover argument into the original command:
dotnet dotcover test

So, this is how you can make it work (steps 1-4 are performed only once per project):

  1. Go to nuget.org and find the dotCover.CommandLineTools package.
  2. Do NOT reference this package in your unit tests project. Instead, open its .csproj file and add the following line that contains the package name and the current version (2018.2 EAP03 or later):
    <DotNetCliToolReference Include="JetBrains.dotCover.CommandLineTools" Version="2018.2.0-eap03" />
  3. In the command line, go to the directory containing your unit tests project.
  4. Run dotnet restore
    This will download dotCover command line tools to your computer.
  5. Run tests with coverage analysis:
    dotnet dotcover test
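For orientation, here’s what a minimal test project file could look like after step 2. The project layout, target framework, and test-package names/versions are illustrative, not from this post; only the DotNetCliToolReference line is:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <!-- Hypothetical test packages; your project will have its own -->
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.7.2" />
    <PackageReference Include="xunit" Version="2.3.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
  </ItemGroup>

  <!-- The CLI tool reference from step 2 - note it is NOT a PackageReference -->
  <ItemGroup>
    <DotNetCliToolReference Include="JetBrains.dotCover.CommandLineTools" Version="2018.2.0-eap03" />
  </ItemGroup>

</Project>
```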

Important notes:

  1. The suggested way of using the console runner is NOT a replacement but an addition to the good old dotCover.exe, which you can still use to analyze coverage if you want to.
  2. If you want to specify any argument you’ve used previously with dotCover.exe, you can do this by simply adding the dc prefix to the argument name. E.g., to specify a report type, use --dcReportType instead of --ReportType:
    dotnet dotcover test --dcReportType=XML
  3. You don’t have to specify whether you want to cover (a coverage snapshot is saved) or analyze (a human-readable report is saved). This is one more improvement in this release that also applies to dotCover.exe. Now, the --ReportType / --dcReportType is the only argument you should use: if it is specified, you’ll get a report of a certain type; if not, a regular coverage snapshot will be saved.
  4. If you configured dotCover.exe via an XML file and want to continue using it, simply specify a path to the file:
    dotnet dotcover test --dcXML="C:\config\config.xml"
    All parameters in the XML file that are not applicable to the new runner will be ignored.
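The dc-prefix rule from note 2 is mechanical, so a profiling script can apply it automatically. A tiny converter sketch (the helper itself is ours, for illustration; only the prefixing rule comes from this post):

```shell
# Convert dotCover.exe-style options (--Name=value) into the dc-prefixed
# form used by "dotnet dotcover test" (--dcName=value).
dc_args() {
  out=""
  for a in "$@"; do
    # strip the leading "--" and re-add it with the dc prefix
    out="$out --dc${a#--}"
  done
  # print without the leading space
  printf '%s\n' "${out# }"
}

dc_args --ReportType=XML
```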

One more important thing: the updated runner doesn’t require any additional workarounds for getting coverage of long-running tests (> 100 ms).

All in all, dotnet dotcover test is the fastest and easiest way to analyze test coverage from the command line. It’s already available in ReSharper Ultimate 2018.2 EAP, so why not check it out right now?

The post > dotnet dotcover test appeared first on .NET Tools Blog.

How does my app allocate to LOH? Find out with dotMemory 2018.2!


If you’re a long-term dotMemory user, you may have noticed the absence of big new features in dotMemory ever since we added support for memory dumps in 2017.2. Rest assured this is not because we’ve lost interest in memory profiling. The opposite is true: our team has been investing even more effort in this area, but most of the current changes are happening under the profiler’s hood.

One such change is the way dotMemory collects and processes data in real time. This may enable many new features in the future, but as of now, it has brought a couple of neat improvements. Both of them are related to the real-time memory usage diagram (the timeline) that you see during profiling.

Improved timeline in dotMemory 2018.2

More precise data and support for all app types

First of all, we’ve stopped using Windows performance counters as a data provider and switched to the Microsoft Profiling API. As a result, the timeline data is now more precise and does not differ from the data you see in a snapshot. More importantly, the timeline is now available for all types of apps, including .NET Core, ASP.NET Core, IIS-hosted web apps, Windows services, etc.

The “Allocated in LOH” chart

The second improvement is a new chart entitled Allocated in LOH since GC. It probably deserves a more detailed explanation.

A brief reminder on what LOH is and why it’s important: the Large Object Heap (or LOH for short) is a separate segment of the managed heap used to store large objects. When an object is larger than 85 KB, the CLR doesn’t allocate it to the Gen 0 heap, but to the LOH instead.

For the sake of performance, LOH is never compacted (though you can force the garbage collector to compact LOH in .NET Framework 4.5.1 and later). As a result, LOH becomes fragmented over time. Another problem is that the CLR collects unused objects from LOH only during full garbage collections. Thus, LOH objects can take up space even when they are no longer used. All this gives your app a larger memory footprint (on x86 systems, this may even result in OutOfMemory exceptions).

Large Object Heap fragmentation

So, how does the improved timeline help you control LOH? In 2018.2, it shows you not only the size of LOH, but also all allocations to LOH as they happen. This helps you understand when LOH allocations happen (on application startup, during some work, etc.) and how intense they are (e.g., you can have significant LOH memory traffic that doesn’t change the LOH size).

dotMemory 2018.2. The Allocated in LOH chart

In the GIF above, you can see the Allocated in LOH chart (oblique hatching above the LOH size graph) for a simple application that constantly allocates large objects. Note that the chart shows the size of objects that have been allocated in LOH since the last garbage collection. That’s why, after each GC, the graph restarts from zero.

Before you download the latest ReSharper Ultimate EAP and try the feature for yourself, we have one more small reminder about premature optimization. If you don’t see any evident problems with memory consumption, there’s nothing bad about your app allocating memory in LOH. In most cases, you can use the new Allocated in LOH chart simply to get a better understanding of how your app works.

The post How does my app allocate to LOH? Find out with dotMemory 2018.2! appeared first on .NET Tools Blog.

Performance profiling .NET code in Rider with integrated dotTrace


Rider 2018.2 was the first release to host one of our .NET tools, dotCover, together with its unit test coverage features. As we mentioned back then, this was just the beginning. Today, it’s performance profiling’s turn to be taken on board. We are proud of our first Rider release with an integrated performance profiler: JetBrains dotTrace is now part of the latest Rider 2018.3 EAP build!

In this introductory post, let’s take a look at the profiler’s capabilities, supported systems and frameworks, and licensing.

What operating systems and frameworks are supported?

As of EAP 03, we support the following operating systems and frameworks:

Operating systems:

  • Windows: get and analyze performance snapshots
  • Linux: not yet (planned)
  • macOS: not yet (planned)

Frameworks:

  • .NET Framework
  • .NET Core
  • Mono: not yet (planned)
  • Mono (Unity): not yet (planned)

All of the above is relevant only for EAP 03. It’s likely (but not yet 100% certain) that you will see support for Linux, macOS, Mono, and Mono for Unity (a bit less likely) in future EAP releases and in the final 2018.3.

How will it be licensed?

For now, we’re keeping the same policy for dotTrace as for the integrated dotCover tool. Using the profiler requires a license for either Rider + ReSharper Ultimate or the All Products Pack. Once you have it, the bundled dotTrace plugin is installed by default. Note that all EAP builds also have it.

Rider plugins settings

How to profile (what features are already in Rider)

As usual, we prefer to go step by step. In this first step, we’re giving you all the basic functionality: getting performance snapshots, analyzing the call tree, and hot spots. Let’s take a closer look!

Configuring a profiling session

Profiling sessions must be configured via Rider run configurations:

Rider. Configure a profiling session

  1. In the toolbar, select Edit Configurations… from the list of run configurations.
  2. In the opened Run/Debug Configurations window, select the Profiling Options tab.
  3. Specify the profiling type and other profiling options.

That’s it – you are ready to start profiling!

Starting a session and getting snapshots

  1. To start profiling a run configuration, either select Run | Run ‘config_name’ with Profiling in the main menu or click the corresponding button on the toolbar.
    Rider. Run profiling session
  2. Once the profiling starts, you will see the Performance Profiler tool window displayed on the Profiling tab, with the profiling controller inside. Reproduce the performance issue you’re looking to investigate, or get enough data on how your app works. Then, click Get Snapshot. The collected snapshot will be added to the list on the right. To start collecting data again, click Start Profiling.
  3. After you collect one or more snapshots, you may finish the profiling session. Normally, you do it either by closing the profiled application or by detaching the profiler via the Detach button. (Whereas Kill forcibly terminates the app and the session, so use it only in an emergency.)

Rider. Get profiling snapshot

Analyzing the snapshot

Rider. Analyze performance snapshot

  1. On the All Snapshots tab of the Performance Profiler tool window, select the snapshot you want to analyze.
  2. Analyze the collected data using one of the available views:
    1. Call Tree: a “classic” call tree that shows you all method calls in all threads. Each top-level node represents a top-level function which was executed by a certain thread. Use this view to quickly get down to actual application activity.
    2. Top Methods: the best place to start when analyzing application performance. It is a plain list of the methods with the highest execution time. Note that you can reduce the system functions “noise” by excluding them from the list with the Hide system functions toggle. When the toggle is enabled, each method’s execution time is calculated as the sum of the method’s own time and the time of all child system methods (down to the next user method in the stack).

    IMPORTANT: dotTrace in Rider is able to take Timeline snapshots, but the integrated viewer will open them as regular (Sampling) snapshots. To get all of the benefits of Timeline profiling analysis (UI freezes, garbage collection, I/O operations, memory allocation, etc.), you should open the Timeline snapshots in the standalone version of dotTrace.

  3. Once the suspicious method is found, double-click it or press Enter. Rider will navigate you right to the method’s source code.

It’s also worth mentioning that the scope of Rider’s Search Everywhere feature (Ctrl+T in the Visual Studio layout) now includes the opened snapshot as well:

Rider. Find function in snapshot

Download the latest Rider 2018.3 EAP and check out its profiling capabilities for yourself. And of course, we would be glad to hear what you think of it in the comments below!

The post Performance profiling .NET code in Rider with integrated dotTrace appeared first on .NET Tools Blog.


Performance Profiling in Rider 2018.3. What’s New?


If you’re an active Rider 2018.3 user or just follow our blog, you probably know that Rider recently got an integrated performance profiler based on JetBrains dotTrace. Though we already reviewed the profiler features during the EAP stage, the release version brings some important changes, especially concerning profiling session configuration. Read this post to learn more about them.

Supported OSs and frameworks

Just to reiterate: in Rider 2018.3, the profiler supports only Windows + .NET Framework / .NET Core. We’re on the finish line with adding support for Mono and Mono Unity (as well as for Linux and macOS), but the feature still requires some polishing. It is highly likely that you will see it in early 2019.1 EAP builds.

What can you profile in Rider?

A quick reminder of what can be a profiling target:

  • .NET / .NET Core standalone applications
  • ASP.NET / ASP.NET Core web applications (IIS Express only)
  • Arbitrary .NET / .NET Core processes
  • NUnit / xUnit / MSTest unit tests

In this post, we’ll take a quick look at how you can profile apps in Rider. For details on how to profile tests or arbitrary .NET processes, please refer to the Rider documentation. So, what has changed exactly in the release version of Rider 2018.3?

Configuring and starting a profiling session

Profiling configuration in Rider
Session configuration has been affected the most, as it is no longer part of the run configuration:

  • Profiling session configuration is a separate entity that is selected/edited right from the toolbar.
  • The profiling target is always the executable specified in the currently selected run configuration. For example, if you profile a web app on IIS Express, the profiling target is iisexpress.exe, which runs this web app. The supported run configuration types are those listed above.
  • To start a session, simply select a profiling configuration and click the same control on the toolbar.

Profiling session configuration in Rider

Getting snapshots

This part hasn’t changed much since the EAP stage. It’s quite straightforward: after the profiling session starts, do what you need to do in your application, and then click Get Snapshot.
Get performance snapshot in Rider

Analyzing snapshots

Once you have a performance snapshot, use the Top Methods and Call Tree views to determine the cause of a performance bottleneck. The analysis workflow has changed somewhat since the EAP.

Combined Call Tree and Top Methods

First of all, the Top Methods and Call Tree views are now connected with each other. If Follow the selection is turned on, Top Methods shows methods only for the subtree that is currently selected in Call Tree.
Analyzing profiling snapshots in Rider

Filters for Timeline snapshots

The second improvement is related to Timeline profiling. We’ve added more filters compared to the EAP builds. Now, you can use four basic filters:

  • by thread,
  • by thread state,
  • by method, and
  • by subsystem.

Analyzing profiling snapshots in Rider. Using filters

Improved navigation

The navigation capabilities are also improved. You can navigate:

  • from a snapshot to code,
  • from code to a snapshot,
  • from Search Everywhere to a snapshot.

Analyzing profiling snapshots in Rider. Navigation

As always, we invite you to download the latest Rider to try its new profiling features in practice.

The post Performance Profiling in Rider 2018.3. What’s New? appeared first on .NET Tools Blog.

What Do These …+c… Classes Do in my Memory Snapshots?


There’s nothing we love as much as user feedback. It is a priceless source of insights into how people use tools like dotMemory, what gets them excited – and what gets them confused.

Over the last year we’ve received multiple questions from users seeing classes with ...+<>c... in their names. They said:

  • “…in the object browser we have some instances of an object that ends with +<>c. I can’t find any information what kind of objects this is and I hope that you can help me?”
  • “What does +<>c mean? I am new to dotMemory.”
  • “I am looking at some “Survived Objects” and I see a lot of “MyClass+<>c“.”
  • “dotMemory lists a lot of objects that look like “ClassName+<>c...” with no explanation.”

Let’s look at what these are!


In dotMemory, we may see the following:

How lambda looks like in a memory snapshot

So, what is the mysterious ...+<>c... class? The answer is quite simple: it’s a helper class the compiler creates to execute a lambda. Actually, we’ve already written about this in a series of posts on how to fight performance issues in .NET. Let me try to sort out the details one more time, especially considering that there have been some changes in the compiler since then.

In this example, we have some StringUtils class that has a Filter method used to apply filters on a list of strings:

public static class StringUtils
{
   public static List<string> Filter(List<string> source, Func<string, bool> condition)
   {
       var result = new List<string>();
      
       foreach (var item in source)
       {
           if (condition(item))
               result.Add(item);
       }

       return result;
   }
}

We want to use this method to filter out strings that are longer than 3 symbols. We will pass this filter to the condition argument using a lambda. Note that the resulting code (after compilation) will depend on whether the lambda contains a closure (a context that is passed to a lambda, e.g. a local variable declared in a method that calls the lambda).

Let’s take a look at the possible ways to call the Filter method: via a lambda without closure, via a lambda with closure, and via a method group.

A lambda with no closure

Example

First, let’s use a lambda without a closure. Here we pass the maximum string length (3) as a number, so no additional context is passed to the lambda:

public static class FilterTestNoClosure
{
   public static void FilterLongString()
   {
       var list = new List<string> {"abc", "abcd", "abcde"};
       var result = StringUtils.Filter(list, s => s.Length > 3);
       Console.WriteLine(result.Count);
   }
}

Decompiled code

If we decompile this code in dotPeek (our free decompiler), we’ll get the following:

public class FilterTestNoClosure
{
   public void FilterLongString()
   {
        List<string> source = new List<string>();
       source.Add("abc");
       source.Add("abcd");
       source.Add("abcde");
       // ISSUE: method pointer
       Console.WriteLine(StringUtils.Filter(
          source, FilterTestNoClosure.<>c.<>9__0_0 ?? (
          FilterTestNoClosure.<>c.<>9__0_0 = new Func<string, bool>(
          (object) FilterTestNoClosure.<>c.<>9, __methodptr(b__0_0)))).Count);
   }

   [CompilerGenerated]
   [Serializable]
   private sealed class <>c
   {
       public static readonly FilterTestNoClosure.<>c <>9;
       public static Func<string, bool> <>9__0_0;

       static <>c()
       {
           FilterTestNoClosure.<>c.<>9 = new FilterTestNoClosure.<>c();
       }

       public <>c()
       {
           base..ctor();
       }

        internal bool <FilterLongString>b__0_0(string s)
       {
           return s.Length > 3;
       }
   }
}

As you can see, the compiler has created a separate <>c class with a static constructor, making the lambda a method in this class (<FilterLongString>b__0_0). The method is called via the <>9__0_0 static field. Note that this field is checked against null, so the delegate is created only once, during the first call.

Console.WriteLine(StringUtils.Filter(source, FilterTestNoClosure.<>c.<>9__0_0 ?? (FilterTestNoClosure.<>c.<>9__0_0 = new Func<string, bool>((object) FilterTestNoClosure.<>c.<>9, __methodptr(<FilterLongString>b__0_0)))).Count);

You may wonder what advantages this approach may have. The answer is that it won’t generate any memory traffic. Even if you call FilterLongString ten thousand times, the lambda will be created just once.
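You can see this caching without a profiler, too. In the following sketch (our own illustration, assuming the Roslyn compiler; the CachingDemo and MakeFilter names are ours), two delegates produced by the same non-capturing lambda turn out to be the very same instance:

```csharp
using System;

public static class CachingDemo
{
    // A non-capturing lambda compiles to a method on the generated <>c class;
    // the delegate is created once and cached in a static <>9__... field.
    public static Func<string, bool> MakeFilter() => s => s.Length > 3;

    public static void Main()
    {
        var first = MakeFilter();
        var second = MakeFilter();
        Console.WriteLine(ReferenceEquals(first, second)); // True: the same cached delegate
    }
}
```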

What about dotMemory?

To confirm this, let’s see what the following code looks like in dotMemory:

public static void Main(string[] args)
{
   var filterTest = new FilterTestNoClosure();
   for (int i = 0; i < 10000; i++)
   {
      filterTest.FilterLongString();
   }

   Console.ReadLine(); // the snapshot is taken here
}

The Memory Traffic view in dotMemory will look as follows:

Memory traffic from a lambda without closure

As you can see, only one FilterTestNoClosure+<>c object is created. If we examine this instance using the Key Retention Paths view, we’ll see that it is retained via its static fields. Note that, like any static members, these fields will remain in memory for the application’s entire lifetime. Typically, though, this is not a big deal, as such static objects are quite small.

Key retention paths of a lambda with no closure

Worth knowing

In the pre-Roslyn-compiler era (Visual Studio 2013 and earlier), a lambda without a closure would compile into a static method inside the same class that calls the lambda. For the sake of performance, a separate class is now created. You can find more details in this Stack Overflow discussion.

The main takeaway

If you’re wondering what a ...+<>c... object is doing in your snapshot, check the memory traffic. If the object is created only once and is retained via static references, then it’s definitely a lambda without a closure. Should you worry? Absolutely not. This is normal compiler behavior.

A lambda with a closure

Example

Of course, a more flexible approach is to make the maximum string length a parameter:

public class FilterTestClosureInArg
{
   public void FilterLongString(int length)
   {
       var list = new List<string> {"abc", "abcd", "abcde"};
       var result = StringUtils.Filter(list, s => s.Length > length);
       Console.WriteLine(result.Count);
   }
}

Decompiled code

But how will this be compiled?

public class FilterTestClosureInArg
{
   public void FilterLongString(int length)
   {
       FilterTestClosureInArg.<>c__DisplayClass0_0 cDisplayClass00 = 
          new FilterTestClosureInArg.<>c__DisplayClass0_0();
       cDisplayClass00.length = length;
        List<string> source = new List<string>();
       source.Add("abc");
       source.Add("abcd");
       source.Add("abcde");
       // ISSUE: method pointer
       Console.WriteLine(StringUtils.Filter(
          source, new Func<string, bool>((object) cDisplayClass00, __methodptr(<FilterLongString>b__0))).Count);
   }

   [CompilerGenerated]
   private sealed class <>c__DisplayClass0_0
   {
       public int length;

       public <>c__DisplayClass0_0()
       {
           base..ctor();
       }

        internal bool <FilterLongString>b__0(string s)
       {
           return s.Length > this.length;
       }
   }
}

Here the compiler acted differently. It still created a separate class <>c__DisplayClass0_0, but the class no longer contains static fields. The length argument passed to the lambda is made a public field of the <>c__DisplayClass0_0 class. And, most importantly, an instance of <>c__DisplayClass0_0 is created each time FilterLongString() is called:

FilterTestClosureInArg.<>c__DisplayClass0_0 cDisplayClass00 = new FilterTestClosureInArg.<>c__DisplayClass0_0();
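The difference is easy to observe in code as well. In this sketch (our own illustration; the ClosureDemo and MakeFilter names are not from the post), each call allocates a fresh display class and a fresh delegate, so two delegates produced by the same capturing lambda are never the same instance:

```csharp
using System;

public static class ClosureDemo
{
    // The lambda captures 'length', so every call allocates a new
    // <>c__DisplayClass instance and a new Func<string, bool> around it.
    public static Func<string, bool> MakeFilter(int length) => s => s.Length > length;

    public static void Main()
    {
        Console.WriteLine(ReferenceEquals(MakeFilter(3), MakeFilter(3))); // False: a new delegate per call
    }
}
```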

This results in huge memory traffic if the FilterLongString() method stays on a hot path (is called frequently). Let’s check this out in dotMemory.

What about dotMemory?

public static void Main(string[] args)
{
   var filterTest = new FilterTestClosureInArg();
   for (int i = 0; i < 10000; i++)
   {
      filterTest.FilterLongString(3);
   }

   Console.ReadLine(); // the snapshot is taken here
}

If we look at the Memory Traffic view, we’ll see that 10,000 instances of the <>c__DisplayClass0_0 class were created and collected.

Memory traffic of a lambda with closure

This is not good from the performance point of view. While object allocation costs almost nothing, object collection is a much more “heavyweight” operation. The only way to avoid this is to rewrite the code so that it doesn’t contain closures. If you want more examples on the topic, take a look at our previous post, Unusual Ways of Boosting Up App Performance. Lambdas and LINQs.
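One common way to rewrite such code without a closure is to pass the captured value as an explicit “state” argument, so the lambda captures nothing and can be cached by the compiler. Below is a hypothetical closure-free variant of the Filter method (our own sketch; the StringUtilsNoClosure name and the TState overload are not from the original post):

```csharp
using System;
using System.Collections.Generic;

public static class StringUtilsNoClosure
{
    // The state travels through an explicit argument instead of a closure.
    public static List<string> Filter<TState>(
        List<string> source, TState state, Func<string, TState, bool> condition)
    {
        var result = new List<string>();
        foreach (var item in source)
        {
            if (condition(item, state))
                result.Add(item);
        }
        return result;
    }

    public static void Main()
    {
        var list = new List<string> { "abc", "abcd", "abcde" };
        // The lambda captures no locals, so it compiles to a cached static delegate.
        var result = Filter(list, 3, (s, max) => s.Length > max);
        Console.WriteLine(result.Count); // 2
    }
}
```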

Note that a closure is created in all cases where you pass local context. For example, a local variable passed to a lambda leads to a closure as well:

public class FilterTestClosureInLocalVar
{
   public void FilterLongString()
   {
       var length = 3;
       var list = new List<string> {"abc", "abcd", "abcde"};
       var result = StringUtils.Filter(list, s => s.Length > length);
       Console.WriteLine(result.Count);
   }
}

Worth knowing

LINQ queries have the same implementation under the hood, so the ...+<>c__DisplayClass... classes may indicate not only lambdas with closures but LINQ queries as well.
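For instance, this LINQ version of our filter (our own sketch) produces exactly the same <>c__DisplayClass allocation, because the query captures the local length variable:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LinqClosureDemo
{
    public static void Main()
    {
        var length = 3; // captured by the query => a display class is allocated
        var list = new List<string> { "abc", "abcd", "abcde" };
        var result = list.Where(s => s.Length > length).ToList();
        Console.WriteLine(result.Count); // 2
    }
}
```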

The main takeaway

When inspecting a snapshot, it is always worth taking a look at memory traffic. If you see a lot of allocated/collected objects with ...<>c__DisplayClass... in their names, you’ll know these are lambdas with closures. When examining these objects, ask yourself two questions:

  1. Do they really impact performance?
  2. If yes, can you rewrite the code without a closure?

A method group instead of a lambda

Example

Back in 2005, C# 2.0 introduced the ‘method group conversion’ feature, which simplifies the syntax used to assign a method to a delegate. In our case, this means that we can replace the lambda with a method and then pass this method as an argument.

public class FilterTestMethodGroup
{
   public void FilterLongString()
   {
       var list = new List<string> {"abc", "abcd", "abcde"};
       var result = StringUtils.Filter(list, Compare);
       Console.WriteLine(result.Count);
   }

   private bool Compare(string s)
   {
       return s.Length > 3;
   }
}

Decompiled code

We want to know if this approach will generate memory traffic. A quick look at the decompiled code says that yes, unfortunately, it will.

public class FilterTestMethodGroup
{
   public void FilterLongString()
   {
        List<string> source = new List<string>();
       source.Add("abc");
       source.Add("abcd");
       source.Add("abcde");
       // ISSUE: method pointer
       Console.WriteLine(StringUtils.Filter(
         source, new Func<string, bool>((object) this, __methodptr(Compare))).Count);
   }

   private bool Compare(string s)
   {
       return s.Length > 3;
   }

   public FilterTestMethodGroup()
   {
       base..ctor();
   }
}

The compiler doesn’t create an additional class. Instead, it simply creates a new instance of Func<string, bool> each time FilterLongString() is called. Called 10,000 times, it will spawn 10,000 instances.

Memory traffic from a method group

The main takeaway

When using method groups, exercise the same caution as when you use lambdas with closures. They may generate significant memory traffic if on a hot path.
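If the target method can be made static, one way to avoid the per-call allocation is to perform the method group conversion once and cache the resulting delegate in a static readonly field. A sketch under that assumption (our own illustration, shown with List<T>.FindAll to keep it self-contained):

```csharp
using System;
using System.Collections.Generic;

public class FilterTestCachedDelegate
{
    // The method group conversion happens once, at type initialization;
    // every later call reuses the same delegate instance.
    private static readonly Predicate<string> ComparePredicate = Compare;

    public int FilterLongString()
    {
        var list = new List<string> { "abc", "abcd", "abcde" };
        return list.FindAll(ComparePredicate).Count; // 2
    }

    private static bool Compare(string s) => s.Length > 3;
}
```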

We hope this post has made things a little clearer. Now, if you encounter classes with ...+<>c... in their names, you’ll know that these are either lambdas or LINQ queries, and that the potential problem here is memory traffic caused by closures.

Still don’t have dotMemory, but want to check your application for memory traffic? You’re welcome to download and try dotMemory free for 5 days of actual use.

(Note that the easiest way to install dotMemory is to use our Toolbox App. All you need to do is click Install next to dotMemory Portable.)

The post What Do These …<>+c… Classes Do in my Memory Snapshots? appeared first on .NET Tools Blog.

Profiling Mono and Mono Unity Apps on Windows, macOS, and Linux


Back in Rider 2018.3, we added a performance profiler to Rider. As we try to deliver new features as quickly as possible, the integrated profiler had some limitations: it only supported the profiling of .NET Framework and .NET Core applications on Windows. Now, it’s time to fill the gap. In Rider 2019.1, the profiler gets support for Mono and Mono Unity – and not just on Windows but on macOS and Linux as well.

Let’s look at what it can do for you.

DOWNLOAD Rider 2019.1 EAP

What is currently supported?

                     Windows   macOS   Linux
.NET Framework          +       n/a     n/a
.NET Core 1.0+          +        –       –
Mono 5.10+              +        +       +
Mono Unity 2018.3+      +        +       +

As you can see, the profiler still lacks support for .NET Core on macOS and Linux. Unfortunately, we can’t do everything at once, so this will happen in a future release. Also, as of EAP3, there are no console profiling tools for macOS and Linux. But, unlike the support for .NET Core, the tools will be added in one of the upcoming 2019.1 EAP releases.

How to profile Mono apps?

Absolutely no changes here. The workflow is the same as with regular .NET Framework apps (more details in this post):

  1. Choose a run configuration (what you’re going to profile).
  2. Choose a profiling configuration (how you’re going to profile). Note that there’s a special profiling configuration for Mono apps: Timeline (Mono). Because of limitations in Mono, other profiling modes are not available.
  3. Run profiling and take snapshots.

Profiling Mono apps on macOS

How to profile unit tests?

Profiling of Mono unit tests is also supported. Simply select a unit test and then run Profile from the context menu.

Profiling Mono apps on macOS

How to profile Mono Unity apps?

This one is a little bit more complicated. The workflow is almost the same, but there’s a difference when it comes to running a profiling session:

  1. Choose a run configuration: For a Unity app, it’s always either the Attach to Unity Editor or the Attach to Unity Editor and Play configuration.
  2. Choose a profiling configuration. As with Mono apps, there’s a separate profiling configuration: Timeline (Unity).
  3. Run profiling and get snapshots.
    When profiling a Unity application, the profiling target is always the Unity Editor. The problem is that the Editor is probably already running alongside Rider, so you cannot start profiling it. Therefore, when you try to run Timeline (Unity), Rider asks you to:
    1. Close the Unity Editor first.
    2. Run the Timeline (Unity) profiling one more time. Rider will take care of the rest: it will run the Unity Editor and start the profiling session.

Profiling Unity apps on macOS

Note that instead of restarting Unity Editor, you have an alternative to simply run the compiled game under profiling. In this case, in step 1 you should select the Unity Executable run configuration and specify the path to the game executable.

Profiling Unity apps on macOS

Another use of the Unity Executable configuration is to run one more Unity Editor instance with profiling enabled. All you need to do is specify the path to the Unity Editor executable instead of the game.
Profiling Unity executable

That’s all for now! To try new Rider profiling capabilities, download the latest Rider 2019.1 EAP. Stay tuned!

The post Profiling Mono and Mono Unity Apps on Windows, macOS, and Linux appeared first on .NET Tools Blog.

Code Coverage on macOS and Linux in Rider 2019.1


Rider 2019.1 brings a lot of good news for macOS and Linux users: our profiling and code coverage tools are now supported (to varying degrees) on macOS and Linux. In this post, we’ll dive deeper into the code coverage updates in Rider.

What is currently supported?

                Windows            macOS              Linux
.NET Framework  +                  n/a                n/a
.NET Core       + (1.0 and later)  + (2.0 and later)  + (2.0 and later)
Mono            –                  –                  –
Mono Unity      –                  –                  –

In this release, you can analyze the coverage of your .NET Core unit tests. This works on all operating systems. Support for Mono and Mono Unity is still a work in progress and is planned for 2019.2. Note that as of EAP 04, the code-coverage console tool is not yet available.

How to use code coverage in Rider?

As usual: run coverage analysis either from the Unit Tests window or right from the editor.

Unit tests coverage analysis on macOS

Continuous testing works as well:

Continuous testing on macOS

Note that if you try to run coverage analysis for a Mono application, the Unit Tests window will ignore the tests and mark them as Not supported. The Unit Tests Coverage window will contain no coverage results.

No coverage results

One more note about our licensing model. Code coverage and profiling features are available for you if you have a ReSharper Ultimate + Rider subscription or an All Products Pack subscription. Or you can use any of the Rider EAP versions, which always include the full feature set.

Download the latest Rider EAP and try code coverage and other Rider features. As always, we would greatly appreciate your feedback!

The post Code Coverage on macOS and Linux in Rider 2019.1 appeared first on .NET Tools Blog.

Console Profiler replaces Remote Profiling in dotMemory 2019.2


dotMemory 2019.2 no longer supports remote profiling. Remote profiling allowed you to profile an application running on a remote computer with dotMemory running locally. So, why did we get rid of it?

First, our statistics showed that the feature’s user base was vanishingly small, not least because remote profiling requires some effort from the end user. You have to launch the remote agent on a remote machine, make sure the agent is not blocked by a firewall, and so on. This presents certain challenges, especially in production environments.

Second, we believe that the dotMemory console profiler is a better choice for profiling on a remote computer. The process doesn’t require that much preparation and is easily reproducible (you create one batch script and use it anytime).

Before 2019.2, the console profiler could not fully replace the standalone dotMemory as it lacked support for many application types. The 2019.2 release has fixed this issue by adding separate commands for all kinds of applications:

  • get-snapshot – for attaching to a running .NET application and getting a single snapshot.
  • attach – for attaching to a running .NET application and getting any number of snapshots.
  • start – for starting a standalone .NET application under profiling.
  • start-netcore – for a .NET Core application.
  • start-iis – for (re)starting IIS under profiling.
  • start-iis-express – for an IIS Express–hosted web application.
  • start-windows-service – for (re)starting a managed Windows service under profiling.
  • start-wcf-service – for a WCF host with a specified WCF Service Library.
  • start-winrt – for a WinRT application.
  • profile-new-processes – for profiling any managed application that is launched after this command.

To use the console profiler, first, you need to download it. It’s worth noting that the tool is distributed free of charge and does not require any licenses to run.

Getting the console profiler

  1. Download a .zip file with the console profiler from our download page.
  2. Unzip the file on the remote computer.

Actually, that’s it – you’re ready to go! If you prefer NuGet, you can download the corresponding package.

Getting snapshots instantly

The most typical scenario is that some application has a memory problem and you want to instantly get a memory snapshot. For example, there’s an already running application with PID=1234 (it could be a standalone .NET / .NET Core application, IIS server, or a service).

To get a snapshot, run:
dotMemory.exe get-snapshot 1234

If you don’t know the PID but know the name (say, MyApp), run:
dotMemory.exe get-snapshot MyApp --all

Note the --all option at the end. If there’s more than one MyApp process, dotMemory will take a snapshot of each process.

Monitoring applications

Sometimes, you need to track application memory consumption over a long time interval. In this case, you can start or attach to your application under profiling and take snapshots only when a particular condition is satisfied, for example, when a periodic time interval elapses or memory consumption increases by a specified value.

For example, if you want to profile some myapp web app running on IIS server and take snapshots every 5 hours:
dotMemory.exe start-iis --trigger-timer=5h --open-url=localhost/myapp --use-browser=Chrome

Or, for example, if you want to take snapshots only when memory consumption increases by 50%:
dotMemory.exe start-iis --trigger-mem-inc=50% --open-url=localhost/myapp --use-browser=Chrome

Getting snapshots when you want

Sometimes, it’s necessary to have direct control over the profiling process; for example, you need to get a snapshot at some exact moment by a direct command. This can be done by sending special messages to stdin of dotMemory.exe.

Suppose we want to profile some IIS Express-hosted application with a configuration file. Follow these steps:

  1. Start a profiling session:
    dotMemory start-iis-express --service-output --config=.idea\config\applicationhost.config --site=MyWebApp
    Note the --service-output argument. It tells dotMemory to send service messages to stdout, like
    ##dotMemory["connected"]
  2. When you want to take a snapshot, send the following command to stdin:
    ##dotMemory["get-snapshot"]
    After the snapshot is taken, dotMemory will send the following to stdout:
    ##dotMemory["workspace-saved", {path: "..."}]
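If you script this exchange from code rather than typing it into a terminal, the same message protocol works over redirected standard streams. A rough C# sketch (our own illustration; the file name and arguments are placeholders you would adjust to your setup):

```csharp
using System;
using System.Diagnostics;

public static class ProfilerDriver
{
    public static ProcessStartInfo BuildStartInfo() => new ProcessStartInfo
    {
        FileName = "dotMemory.exe",
        Arguments = "start-iis-express --service-output --config=applicationhost.config --site=MyWebApp",
        RedirectStandardInput = true,   // lets us send ##dotMemory["get-snapshot"]
        RedirectStandardOutput = true,  // lets us read ##dotMemory["workspace-saved", ...]
        UseShellExecute = false
    };

    public static void Main()
    {
        using var profiler = Process.Start(BuildStartInfo());

        // Wait for the service message that tells us the profiler is attached.
        string line;
        while ((line = profiler.StandardOutput.ReadLine()) != null
               && !line.Contains("##dotMemory[\"connected\"]")) { }

        // Ask for a snapshot at the exact moment we need it.
        profiler.StandardInput.WriteLine("##dotMemory[\"get-snapshot\"]");
    }
}
```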

In this post, we’ve just touched on some of the console profiler’s capabilities. For more information about the tool, please refer to our online documentation or the console profiler’s built-in help:
dotMemory.exe help

To test the updated dotMemory console profiler in action, download it using this link. Note that to examine the snapshots collected by the console profiler, you should use the standalone version of dotMemory.

We’d love to hear your feedback about the console profiler!

The post Console Profiler replaces Remote Profiling in dotMemory 2019.2 appeared first on .NET Tools Blog.

Cross-Platform dotCover Console Runner and more – What’s New in dotCover 2019.2


In the 2019.2 release, the dotCover team was mainly focused on the console runner tool. The result is some good news about the tool, including one important fix specially for .NET Core users:

  • The dotCover console runner is available not only on Windows, but on macOS and Linux as well.
  • The console runner can perform coverage analysis in Mono projects and gets a new command for analyzing .NET Core projects.
  • The issue with 0% coverage and .NET Core unit tests was fixed. The dotnet dotCover test workaround is no longer needed, but you can still use it if you find this way of running coverage analysis more convenient.

Let’s try these brand-new features in action: we’ll profile some unit tests targeting .NET Core and Mono and we’ll do this on Mac. In addition, we’ll recall how you can use dotnet dotcover test.

Getting the console runner

The console runner is distributed as a .zip archive. When downloading the tool, make sure to select the target operating system:
Download dotCover

Another option is to get the console runner as a NuGet package:

dotCover console runner NuGet package

After you download and unpack the console runner, it makes sense to add the tool’s location to system PATH (so that you can run it from everywhere). Note that the tool on Windows is dotCover.exe, while on Unix systems, it is dotCover.sh.

We’re ready to start coverage analysis!

Getting .NET Core unit tests coverage

By far the most requested feature for the console tool was .NET Core support. We added a new command that runs the dotnet driver: cover-dotnet, or its shorter version, dotnet. It runs dotnet under coverage analysis (as if you specified the path to dotnet in --TargetExecutable).

When we want to get coverage of tests in some project, all we need to do is open the solution folder and run:

dotCover.sh dotnet --output=myRep.html --reportType=HTML -- test

Here’s what will happen:

  • In this example, we run the command from the project folder. Alternatively, you can specify the full path to the project after test.
  • --output=myRep.html --reportType=HTML tells dotCover to generate a coverage report in the HTML format.
  • -- test at the end is an argument passed to dotnet. Actually, you can replace it with --TargetArguments="test" but that is much longer.
  • You’ve probably noticed that we’re using Unix-style syntax for command-line arguments. You can use this syntax on Windows as well. For example, the following are interchangeable on Windows (but not on macOS or Linux):
    /TempDir:"../myDir" and --tempDir="../myDir"
  • In this example, we’re getting coverage of unit tests, but you can also use dotCover.sh for covering applications.

dotCover.sh dotnet

Getting .NET Core unit tests coverage with dotnet dotCover test

Some time ago, dotCover added the ability to run coverage analysis directly via the dotnet driver:

dotnet dotCover test

Initially, the feature was added as a workaround for this issue. The issue is fixed in 2019.2, so, technically, the workaround is no longer needed. Nevertheless, many dotCover users find this ability useful in different cases. The main advantage here is that you don’t need to download the dotCover console runner (a plus in, say, Docker containers). So, we will keep supporting this feature.

We did make some changes in 2019.2. The main change is the name of the package you must specify in .csproj.

Here are the steps to make dotnet dotCover test work (steps 1–4 need to be performed only once per project):

  1. Go to nuget.org and find the JetBrains.dotCover.DotNetCliTool package.
  2. Do NOT reference this package in your unit tests project. Instead, open its .csproj file and add a line that contains the package name and current version into an <ItemGroup> element:
    <DotNetCliToolReference Include="JetBrains.dotCover.DotNetCliTool" Version="2019.2.0.1" />
  3. In the command line, go to the directory containing your unit tests project.
  4. Run dotnet restore
  5. Run tests with coverage analysis: dotnet dotCover test

Getting Mono unit tests coverage

There’s also a new command for Mono projects: cover-mono or just mono. The workflow here is somewhat different from dotnet: to run tests, you must explicitly specify the unit test runner executable. For example, with xUnit, it may look as follows:
dotCover.sh mono --output=myRep.html --reportType=HTML -- ../../../packages/xunit.runner.console.2.4.1/tools/net472/xunit.console.exe xUnitTests.dll -noappdomain

Some notes:

  • --output=myRep.html --reportType=HTML tells dotCover to generate a coverage report in the HTML format.
  • At the end, after the double dash --, we specify a path to xUnit console runner (in our example, it is referenced by the project) and a path to a .dll with tests.
  • -noappdomain tells the xUnit runner to not use app domains when running test code. For details, see the list of known issues below in this post.

dotCover.sh mono

Unfortunately, there are a number of known issues with Mono projects for now:

  • It is not possible to profile child processes. As a result, it is not possible to get coverage of NUnit tests (NUnit runs tests in its child processes).
  • If code runs in a child domain, you’ll get zero coverage. That’s why it’s important to specify the -noappdomain argument when getting coverage of Mono tests.
  • Default coverage filters for filtering out system and unit testing framework assemblies do not work. You will see these assemblies in coverage results.
  • It’s not possible to merge snapshots.

We look forward to your questions and remarks – feel free to ask in the comments below! It’s also worth mentioning that dotCover is a part of ReSharper Ultimate which is available for download here.

The post Cross-Platform dotCover Console Runner and more – What’s New in dotCover 2019.2 appeared first on .NET Tools Blog.

What’s New in dotTrace 2019.3


The 2019.3 release brings a lot of good news for dotTrace users, especially for those who want to profile their apps on macOS and Linux:

  • First of all, the dotTrace command-line profiler is available for both Linux and macOS.
  • Second, we’re adding support for .NET Core on these systems.
  • The third cool thing is about the Timeline Viewer: we’re introducing the flame graph to the call tree to make it easier to analyze.

Now, let’s proceed to the details!

.NET Core support on macOS and Linux

To profile .NET Core applications on macOS and Linux, you can use either JetBrains Rider or the command-line profiler (for more details see the next section). The profiling workflow is almost the same as for any other type of app, but there are some important notes:

  • Only the Sampling profiling type is supported.
  • Because of some limitations of .NET Core, there may be some issues with profiling projects that target .NET Core 3.0 or earlier. In some cases, the profiled application may hang or crash. Projects targeting .NET Core 3.1 can be profiled without any issues.

To profile a .NET Core 3.1 application in JetBrains Rider, simply select the Sampling profiling configuration and start the session as usual. But if you need to profile a project targeting .NET Core 3.0 or earlier, you should create a separate config. Follow these steps:

  1. Open the project you want to profile.
  2. Create a new profiling configuration:
    1. On the toolbar, in the list of profiling configurations, select Edit Configurations…
    2. In the opened Profiling Configurations window, click + Add profiling configuration to add a new profiling configuration.
    3. In the list, select Sampling (.NET Core 3.0 and earlier).
    4. Specify the configuration Name and the other profiling options.
      dotTrace. Support for .NET Core on macOS and Linux
    5. Click OK.
  3. To run profiling, select the newly created configuration on the toolbar, and click the profiling button.

Command-line profiler on macOS and Linux

The command-line profiler for Unix systems is distributed as a .tar.gz archive and as a NuGet package. In terms of functionality, dotTrace.sh is almost equivalent to its Windows counterpart, ConsoleProfiler.exe. The only differences are a few OS-specific dotTrace features. For example, as you already know from the previous section, we intentionally added a separate profiling option for .NET Core 3.0 and earlier versions. To profile such an app using the console profiler, you should specify the --support-earlier-net-core parameter:

./dotTrace.sh start --save-to=./snapshot.dtp --support-earlier-net-core MyApp.exe

Call tree as a flame graph

And here’s the cherry on top that’s waiting for you in dotTrace 2019.3: the flame graph in the Timeline Viewer (for the standalone dotTrace on Windows). The flame graph is a graphical representation of the Call Tree. Each call is shown as a horizontal bar whose length depends on the call’s total time, which equals the function’s own time plus the time of all its child functions. The longer the call, the longer the bar.
To open the graph, click Flame graph in the Call Tree. This will build the graph for the current tree. Note that the graph is smart enough to hide functions that have zero (or close to zero) own time:

dotTrace. Flame graph in Timeline Viewer

The coolest thing about the flame graph is that you don’t need to thoroughly analyze the time of each call, as you immediately see it on the graph.

If you feel like trying any of these features right now, download and install ReSharper Ultimate, Rider, or the dotTrace command-line tools. If you have any questions, post a comment here and we’ll reply as best we can. Thanks!

The post What’s New in dotTrace 2019.3 appeared first on .NET Tools Blog.


What’s New in dotCover 2019.3


In 2019’s last release, dotCover is about to receive its fair share of upgrades:

  • Support for Unity tests in JetBrains Rider.
  • Support for Microsoft Fakes in Visual Studio 2017 or later.
  • The ability to group coverage results by nested namespaces in both Rider and Visual Studio, and in reports generated by the dotCover console tool.

Let’s check out the details!

Unity support

Adding support for Unity tests was the main focus of the 2019.3 release. Here are the highlights:

  • Support for coverage analysis in Unity projects is available only in JetBrains Rider.
  • To run coverage analysis, your Unity project must have the Rider Editor and Test Framework packages. Version requirements:
    Unity             Rider Editor         Test Framework
    2018.3 – 2019.1   any                  any
    2019.2 and later  1.2.0 or later       any
                      earlier than 1.2.0   1.1.1 – 1.1.3

    Note: To be able to run tests, make sure your project has Test Framework 1.1.1 or later.

  • To run coverage analysis, Unity Editor must be started in the special mode with coverage support enabled.
  • Only edit mode tests are supported.

The workflow is as follows:

  1. Open your Unity solution in Rider.
  2. On the Unity toolbar, choose Start Unity with Coverage:
    Start Unity Editor with Coverage
    This will run Unity Editor with coverage support enabled (your Unity project will be opened automatically).
  3. In Rider, open the Unit Tests window, select the desired tests, and click the Cover Selected Unit Tests button.
    dotCover. Cover Unity Tests
    This will run the tests with coverage analysis enabled. Once it’s ready, you will get the results in the Unit Tests Coverage window.

Microsoft Fakes Support

Not much to say here. If you have tests that use Microsoft Fakes, dotCover will calculate their coverage. As easy as that!

dotCover. Microsoft Fakes Support

Note that Microsoft Fakes is supported not only by dotCover in Visual Studio (2017 or later), but also by the dotCover command-line tool. If you use the latter, you should run tests using vstest.console.exe (from Visual Studio 2017 or later).

Grouping Coverage Results by Namespaces

It’s a good practice to divide product features into separate namespaces. Now, you have the ability to see coverage results for a particular namespace, i.e. see the coverage of a particular product feature. This is how it works:

dotCover. Flatten Namespaces

You’re welcome to download the latest ReSharper Ultimate 2019.3 or Rider 2019.3 and check out the new dotCover capabilities for yourself. We look forward to hearing your opinions in the comments below!

The post What’s New in dotCover 2019.3 appeared first on .NET Tools Blog.

Auto-Detect Memory Issues in your App with Dynamic Program Analysis – Rider 2020.1


It seems that a common problem among profiling tools (including ours) is that they require too much effort from the developer. Profiling is currently seen as a kind of last resort for when something has gone horribly wrong. The use of profilers is episodic and chaotic, and quite often ineffective, because you simply can’t be an expert in a tool you use only once every six months. We find this kind of sad, because we strongly believe that regular profiling is essential for product quality.

That being said, is there any solution? Well, our answer is “yes,” but with some caution. Since we can’t force people to use profilers all the time, the only possible solution is to make issue analysis automatic and move it as close to the user as possible. So, without further ado, we’re pleased to introduce Dynamic Program Analysis (DPA)!

What is Dynamic Program Analysis (DPA)?

DPA is a process that runs in the background of your IDE and checks your application for various memory allocation issues. It currently checks for closures and allocations to large and small object heaps (LOH and SOH).

If a method call allocates more than the specified threshold, DPA will mark it as an issue.

DPA. Issue highlighting

DPA starts automatically each time you run or debug your application in Rider. The allocation data is collected with almost zero overhead (slowdown is no more than 2% in large solutions with high memory traffic).

Note that DPA currently supports .NET Framework and .NET Core on Windows only.

Why trace memory allocation?

From our experience, a significant number of performance issues are related to excessive memory allocation. To be more precise, they are related to full garbage collection caused by allocations. Quite often, such allocation issues are the result of bad code design and can be easily fixed.

Here’s a simple example: using struct instead of class. If a type represents a single value and is immutable, it can be defined as a struct. Defining it as a class makes it a reference type. As a result, its instances are placed on the managed heap instead of the stack, which means the instances that are no longer needed must be garbage-collected. You can find more examples in this series of blog posts.
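A minimal sketch of the difference (the type names are illustrative, not taken from any real codebase):

```csharp
using System;

// Defined as a class, every instance is allocated on the managed heap
// and must eventually be garbage-collected.
class PointClass
{
    public double X { get; }
    public double Y { get; }
    public PointClass(double x, double y) { X = x; Y = y; }
}

// Defined as a readonly struct, the same immutable value lives on the
// stack (or inline inside its container) and creates no work for the GC.
readonly struct PointStruct
{
    public double X { get; }
    public double Y { get; }
    public PointStruct(double x, double y) { X = x; Y = y; }
}
```

Creating millions of `PointClass` instances in a hot loop produces heap traffic that DPA would flag; the `PointStruct` version allocates nothing.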

Of course, we don’t want to limit DPA to memory allocation analysis only. In the future, we plan to add more runtime inspections, such as HTTP and SQL request analysis.

How to enable DPA

Typically, enabling DPA doesn’t require any additional actions from your side. All you need to do is install the JetBrains ETW Host service (after the 2020.1 release, the service will be installed along with Rider). DPA will then be activated when Rider starts. That’s it!

How to work with DPA

The coolest thing about DPA is that your typical workflow is not affected in any way! If you feel like you have time to go through allocation issues in your application, simply check the DPA icon status.

Dynamic Program Analysis Workflow

In more detail:

  1. Every time you finish running or debugging your project, check the DPA icon in the status bar. If it’s red (DPA. Issues found), click it and choose View Issues.
  2. Go through the list of issues. At this step, you can:
    • Double-click the issue to view its stack trace.
    • Navigate from an issue to the corresponding code in the editor (with F4).
    • And navigate back (with Alt+Enter -> View related memory allocation issues).
  3. If you think that the issue can be fixed, try fixing it using our tips.
  4. After you fix the issue, run the project one more time and make sure it no longer appears on the DPA list.
  5. If you think that the issue cannot be fixed, suppress this issue. One more way to make the issue disappear from the list is to increase the memory allocation threshold. You can find more details about this in the next section.
  6. Ideally, you should get the green DPA icon, which looks like this: DPA. No issues icon.

Excluding False Positives

All programs require memory. A method may sometimes allocate a lot of memory – not because of code design, but just because this is required by the current use case.

If this is the case for just a few methods, the best solution is to suppress these issues by marking the corresponding method with the SuppressMessage attribute. To do this easily, use the quick-fix in the editor:

DPA. Suppress issues
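A sketch of what a suppressed method might look like. The attribute itself is the standard `System.Diagnostics.CodeAnalysis.SuppressMessageAttribute`; the category and check ID strings below are assumptions, so rely on the quick-fix to insert the exact values DPA expects:

```csharp
using System.Diagnostics.CodeAnalysis;

class ReportBuilder
{
    // The attribute arguments are placeholders; the editor quick-fix
    // fills in the real category and check ID for the suppressed issue.
    [SuppressMessage("ReSharper.DPA", "DPA0001: Memory allocation issues")]
    public byte[] BuildLargeReport()
    {
        // A deliberately large allocation that the use case genuinely needs.
        return new byte[10 * 1024 * 1024];
    }
}
```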

If you’re getting a lot of false positives, then you can simply increase the memory allocation thresholds. To do this, navigate to the Thresholds tab of the Dynamic Program Analysis window and set new thresholds for each issue type.

DPA. Thresholds

Heap Allocations Viewer plugin

The Heap Allocations Viewer plugin highlights every place in your code where memory is allocated, which makes it a perfect companion for DPA.

While the plugin shows allocations and describes why they happen, DPA shows whether a certain allocation is really an issue.

DPA and Heap Allocations Viewer plugin

Feel free to download the latest Rider EAP and give DPA a try. And of course, please leave your feedback for us in the comments below.

The post Auto-Detect Memory Issues in your App with Dynamic Program Analysis – Rider 2020.1 appeared first on .NET Tools Blog.

Memory profiling on Linux and macOS with dotMemory 2020.2


Version 2020.2 EAP01 finally brings dotMemory to Linux and macOS! For these systems, dotMemory is currently available only as a command-line tool. The tool is free and lets you take and save memory snapshots. To analyze the snapshots, you still need the standalone version of dotMemory, which is only available on Windows.

What you can profile

Here’s the dotMemory compatibility list for Linux and macOS:

dotMemory compatibility on macOS and Linux

How it is distributed

The command-line tool is distributed in two forms:

  • A .tar.gz archive (macOS | Linux): Download it from the JetBrains website, unpack, and run dotMemory.sh with the required arguments.
  • A NuGet package on nuget.org (macOS | Linux): This can be more convenient in automation scenarios (e.g., on a CI/CD server): reference the package in your project and run nuget restore to get the tool on your server.

How to use it

If you have already used the dotMemory command-line profiler on Windows, you will find almost no differences. The only one is a reduced set of commands, as some application types (for example, IIS) don’t exist on Linux and macOS. For those who haven’t, let’s go through the basics.

Get a snapshot of a running .NET Core app

The fastest way to get a snapshot of a running process is to use the get-snapshot command. For example, this command takes a snapshot of the process with ID 5589:
./dotMemory.sh get-snapshot 5589

Run a .NET Core app under profiling

Note that to perform get-snapshot, dotMemory has to attach to the process, and therefore it has the same limitations as the attach command: it’s applicable only to processes that use .NET Core version 3.0 or later on Linux or .NET 5 on macOS. So, to profile an app that uses .NET Core 2.2 or earlier, you should run this application under profiling with the start-net-core command. For example:
./dotMemory.sh start-net-core MyKestrelAspApp.dll

But how can you get snapshots? One of the ways is to send commands to dotMemory’s stdin. For example, to get a snapshot, send:
##dotMemory["get-snapshot"]

Running dotMemory on macOS
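The same stdin protocol can be driven from a script. Here’s a minimal Python sketch; the path to dotMemory.sh and the target .dll name are assumptions for illustration:

```python
import subprocess


def dotmemory_command(name: str) -> str:
    """Format a command in the stdin protocol dotMemory understands."""
    return f'##dotMemory["{name}"]'


def profile_and_snapshot(dotmemory_sh: str, app_dll: str) -> None:
    # Start the app under profiling, keeping stdin open to send commands.
    proc = subprocess.Popen(
        [dotmemory_sh, "start-net-core", app_dll],
        stdin=subprocess.PIPE,
        text=True,
    )
    # Ask the profiler to take a snapshot; the app keeps running.
    proc.stdin.write(dotmemory_command("get-snapshot") + "\n")
    proc.stdin.flush()
```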

Another way to get a snapshot is to set a condition for taking the snapshot. This can be great for monitoring applications. For example:

  • ./dotMemory.sh start-net-core --trigger-timer=10h MyKestrelAspApp.dll
    This will start MyKestrelAspApp.dll and take snapshots every 10 hours.
  • ./dotMemory.sh start-net-core --trigger-mem-inc=50% --trigger-delay=5s MyKestrelAspApp.dll
    This will start MyKestrelAspApp.dll and take snapshots when memory consumption increases by 50% compared to when the profiling session started. The 5-second delay is used to ignore the application’s startup activities.
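The memory-increase trigger behaves roughly like the following check. This is an illustrative model of the documented semantics, not dotMemory’s actual implementation:

```python
def should_snapshot(start_bytes: int, current_bytes: int,
                    elapsed_seconds: float,
                    increase_percent: float = 50.0,
                    delay_seconds: float = 5.0) -> bool:
    """Model of --trigger-mem-inc combined with --trigger-delay: fire once
    memory has grown by increase_percent relative to the session start,
    but ignore the first delay_seconds of startup activity."""
    if elapsed_seconds < delay_seconds:
        return False
    return current_bytes >= start_bytes * (1 + increase_percent / 100.0)
```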

We invite you to download the tool and try it on Linux and macOS for yourself. As always, your comments are greatly appreciated.

P.S. If you are looking at other possibilities for profiling your code, take a look at dotMemory self-profiling API. It lets you profile your application right from the source code, is distributed as a NuGet package, and works on all platforms.

The post Memory profiling on Linux and macOS with dotMemory 2020.2 appeared first on .NET Tools Blog.
