The IntelliJ Rust Blog: Open-Source Rust Plugin for IntelliJ-based IDEs | The JetBrains Blog
https://blog.jetbrains.com

New in IntelliJ Rust for 2023.1 (Part 2)
https://blog.jetbrains.com/rust/2023/05/05/new-in-intellij-rust-for-2023-1-part-2/
Fri, 05 May 2023

In the first part of this “What’s New” series, we only saw the tip of the iceberg – various feature bits that the Rust plugin developers managed to implement during the release cycle.

Now we’re going to show you the rest of the iceberg. Let’s dig in and look at all the new ways the plugin can now analyze Rust code.

Code insight

Improved Self keyword

The synthetic Self type, when referred to inside an impl block, now supports the previously missing Create field quick-fix. When a struct is initialized via the Self keyword and a field is absent from its definition, you are offered the choice to add the missing field to the struct.

Similarly, the Convert struct to tuple refactoring converts a struct declared as a block with named fields into a tuple struct, and vice versa. The refactoring now works across all Self usages without exception.

What’s more, code insight can now see through the Self type well enough to suggest the Elide lifetimes quick-fix on self receiver parameters declared with excessive lifetime parameters. The fix is suggested when the receiver’s type is a reference to Self with an explicit lifetime, or such a reference applied as a type parameter to a smart pointer like Box or Rc.
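A minimal sketch of the receiver this quick-fix targets (the Counter type and names below are ours, not from the post):

```rust
struct Counter {
    value: u32,
}

impl Counter {
    // Before the fix: an excessive explicit lifetime on the self receiver.
    // fn get<'a>(self: &'a Self) -> u32 { self.value }

    // After applying "Elide lifetimes": the lifetime parameter is gone
    // and the compiler infers it under the usual elision rules.
    fn get(&self) -> u32 {
        self.value
    }
}

fn main() {
    let c = Counter { value: 7 };
    assert_eq!(c.get(), 7);
}
```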

Optimized import action

Unused imports declared inside functions are now also treated as removable by the Optimize imports action. For now, the experimental org.rust.macros.proc feature needs to be enabled for this to work as intended, but rest assured that this will change soon!

Method suggestions

The code completion algorithm now filters inaccessible methods out of the suggestion list, leaving only valid entries. Combined with constantly improving ML completion, the Rust plugin’s autocompletion subsystem focuses on providing you with precise and pertinent predictions.

Run / Debug

References and pointers content

Automatic dereferencing of Rust references and pointers is now possible when debugging with LLDB. The debugger can render the contents of &Vec, *const Vec, and *mut Vec, while also letting you inspect the Vec contents themselves – also a new feature in this release. It can unwrap multiple layers of references applied one after another to reach the underlying value.
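If you’d like to try this yourself, a tiny program like the following (our own example) gives the debugger all three shapes to render; set a breakpoint on the last line and inspect the variables:

```rust
fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    let r: &Vec<i32> = &v;       // a plain reference, now auto-dereferenced
    let p: *const Vec<i32> = &v; // a raw pointer, also rendered
    let rr: &&Vec<i32> = &r;     // nested references are unwrapped layer by layer

    // Use the values so they stay alive and visible at the breakpoint.
    assert_eq!(rr.len(), 3);
    unsafe {
        assert_eq!((*p).len(), 3);
    }
    println!("break here and inspect v, r, p, rr");
}
```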

In addition to the above, the Rust plugin now correctly shows the contents of raw slice pointers (*const [T], *mut [T]) in the debugger. Looking behind raw slice pointers works with LLDB as well as with the other debuggers your system supports.

Skip stepping into stdlib sources

The Rust plugin provides an additional debugging preference that lets you skip standard library sources: std, core, and alloc. A dedicated checkbox controls this behavior under Settings/Preferences | Build, Execution, Deployment | Debugger | Stepping (at the very bottom of the window).

Error detection

This section lists the standard compiler errors that the Rust plugin can now detect. The errors are in line with those defined in Rust’s error codes index.

Patterns as arguments

  • E0130: Arguments of functions in foreign (extern) blocks cannot be declared as patterns.
  • E0642: Trait methods cannot take patterns as arguments.
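A sketch of how to stay clear of E0642 (the trait and names here are ours): bind the tuple to a name in the signature and destructure it in the body.

```rust
trait Point {
    // fn sum((x, y): (i32, i32)) -> i32; // error[E0642]: patterns aren't
    //                                    // allowed in trait method arguments
    fn sum(xy: (i32, i32)) -> i32; // OK: bind the whole tuple to a name
}

struct P;

impl Point for P {
    fn sum(xy: (i32, i32)) -> i32 {
        let (x, y) = xy; // destructure inside the body instead
        x + y
    }
}

fn main() {
    assert_eq!(P::sum((2, 3)), 5);
}
```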

Controls

  • E0571: A break with a value appears in a loop other than a plain loop (such as while or for).
  • An underscore (_) expression appearing anywhere other than the left-hand side of an assignment is highlighted as an error.
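Both rules can be illustrated with a short example of ours:

```rust
fn returns_pair() -> (i32, i32) {
    (1, 2)
}

fn main() {
    // `break` with a value is only allowed in a plain `loop`;
    // inside `while` or `for`, `break n;` would be rejected with E0571.
    let mut n = 0;
    let found = loop {
        n += 1;
        if n * n > 20 {
            break n; // OK: `loop` can yield a value
        }
    };
    assert_eq!(found, 5);

    // An underscore expression is valid only on the left-hand side
    // of an assignment:
    _ = returns_pair().1; // OK: explicitly discard a value
    // let x = _;         // error: `_` is not allowed here
}
```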

Traits

  • E0206: The Copy trait was implemented on a type that is neither a struct nor an enum.
  • E0224: At least one trait is required for an object type.
  • E0323, E0324, E0325: A trait item implementation was expected, but the associated const / method / associated type doesn’t match the trait declaration.
  • E0785: An inherent impl block was written for a dyn auto trait.
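For instance (our own snippet), E0206 and E0785 boil down to the following:

```rust
// E0206: Copy can only be implemented for structs and enums,
// so deriving it on a struct like this is fine...
#[derive(Clone, Copy)]
struct Meters(f64);

// ...while an impl of Copy for a reference or other non-struct,
// non-enum type would be rejected.

// E0785: an inherent impl cannot target a `dyn` auto trait:
// impl dyn Send {} // error[E0785]

fn main() {
    let a = Meters(1.5);
    let b = a; // copied, not moved, thanks to Copy
    assert_eq!(a.0 + b.0, 3.0);
}
```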

Foreign / unsafe

  • E0044: Foreign items must not have type or const parameters in their declaration.
  • E0197: Inherent implementations marked as unsafe.
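A small illustration of ours, using the classic libc abs binding:

```rust
// E0044: items in an extern block cannot have type or const parameters.
extern "C" {
    fn abs(input: i32) -> i32; // OK
    // fn generic<T>(value: T); // error[E0044]
}

// E0197: inherent implementations cannot be marked unsafe.
struct S;
impl S {}           // OK
// unsafe impl S {} // error[E0197]

fn main() {
    let x = unsafe { abs(-3) };
    assert_eq!(x, 3);
}
```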

Other

  • E0106: The missing lifetime annotation error is now shown properly in function and type signatures where a lifetime is required.
  • E0131: The main function must not be defined with generic parameters.
  • E0203: Multiple relaxed bounds on a type parameter.
  • E0403: Const generic parameters declared with duplicate names. By the way, other error messages for already defined names, such as E0428, were updated to match the compiler’s wording.
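Our own sketch of E0106: when the elision rules cannot decide which input the output borrows from, the lifetime must be written explicitly.

```rust
// Without `'a`, this signature triggers E0106 ("missing lifetime
// specifier"): the output reference could borrow from either `x` or `y`.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() {
        x
    } else {
        y
    }
}

// E0403, for comparison, fires on duplicate generic parameter names:
// fn dup<const N: usize, const N: usize>() {} // error[E0403]

fn main() {
    assert_eq!(longer("short", "longest"), "longest");
}
```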

Conclusion

That’s all for today! We’ve had fun sharing our most recent updates to the IntelliJ Rust plugin for this release cycle. If you haven’t seen the first part of this series, check it out. You can also ping us on Twitter or file an issue in the plugin’s issue tracker. See you in the upcoming releases, and thank you for staying with us!

Your Rust team

JetBrains

The Drive to Develop

New in IntelliJ Rust for 2023.1 (Part 1)
https://blog.jetbrains.com/rust/2023/04/11/new-in-intellij-rust-for-2023-1-part-1/
Tue, 11 Apr 2023

The time has come to outline the state of the IntelliJ-based IDEs’ Rust plugin as of the 2023.1 release.

In the following paragraphs, we’ll delve into the novelties, improvements, and refinements that our team has delivered throughout the release cycle. Follow the post to learn more about the significant changes and features. If you want to learn more about a particular item’s history and the details of its implementation, you’ll find relevant links throughout the post.

In this article, we’ll focus on several topics, including improvements to attribute and function-like macros and the code insight capabilities that resulted from them. We’ll also look at progress in the field of Rust debugging, list the errors that the Rust plugin is now able to detect, and cover the main changes improving the experience the Rust plugin provides when working with code.

Let’s go!

Language Support

GATs support

In development for several years, the GATs (generic associated types) language extension has been a long-awaited feature in the Rust community. We are happy to announce that it is now supported by the Rust plugin. With GATs, associated types inside traits can capture generic parameters from the trait’s methods in the form of lifetime or type parameters. The plugin now renders generic parameters for associated types in the Implement Members inspection.

To support generic associated types, we had to update inspections for Rust type aliases and change the internal parser itself. This allowed us to discern generic parameters for associated types and associated type bounds, as well as to support where clauses, a common usage pattern for GATs. All this was done to gradually advance the Rust plugin so that it can make basic type inferences for GATs.
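As a sketch of what the plugin now has to analyze (the LendingIterator trait below is our example, not part of the plugin), GATs let an associated type carry its own generic parameters and a where clause:

```rust
// A classic GATs use case: an iterator whose items borrow from the
// iterator itself. Note the lifetime parameter and `where` clause on
// the associated type - exactly the constructs the parser had to learn.
trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

struct Windows<'s> {
    data: &'s [i32],
    pos: usize,
}

impl<'s> LendingIterator for Windows<'s> {
    type Item<'a> = &'a [i32] where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<&'a [i32]> {
        if self.pos + 2 > self.data.len() {
            return None;
        }
        let w = &self.data[self.pos..self.pos + 2];
        self.pos += 1;
        Some(w)
    }
}

fn main() {
    let data = [1, 2, 3];
    let mut it = Windows { data: &data, pos: 0 };
    assert_eq!(it.next(), Some(&[1, 2][..]));
    assert_eq!(it.next(), Some(&[2, 3][..]));
    assert_eq!(it.next(), None);
}
```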

Support for generic associated types is an ongoing task that continues to be refined. You may also consider contributing to our efforts by providing us with a problem description or suggestion regarding the recent language support feature.

Disable completion and auto-import for specified items

You can now control which items’ paths are auto-imported and which participate in completion thanks to a dedicated table, accessible via Settings/Preferences | Editor | General | Auto Import | Rust. There you’ll find a predefined list of exact items and wildcard modules nominated for exclusion, and you can add your own items from any crate.

Importing an item explicitly with a use declaration bypasses the exclusion settings, allowing refined control over which items are eligible to be shown in the completion list.

The default exclusions, Borrow and BorrowMut, illustrate the prime use cases for this feature. First, blanket implementations, especially those covering a whole set of types, may make it necessary to remove such extraneous elements from the completion list. For the .borrow() and .borrow_mut() methods, we use a heuristic: these methods are typically invoked on constrained type parameters and are rarely used in other cases.

Second, tacit name clashes can change code behavior within the same file. Again, the .borrow() method is a good candidate for exclusion, as Borrow::borrow clashes with RefCell::borrow.
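To see why, consider this small example of ours showing both points at once:

```rust
use std::borrow::Borrow;
use std::cell::RefCell;

fn main() {
    // The blanket impl `impl<T> Borrow<T> for T` makes `.borrow()`
    // available on every type, which is why it floods completion lists:
    let n = 5i32;
    let r: &i32 = n.borrow();
    assert_eq!(*r, 5);

    // On a RefCell, `.borrow()` resolves to the inherent
    // RefCell::borrow (inherent methods win over trait methods),
    // so having Borrow auto-imported makes the call easy to misread.
    let cell = RefCell::new(5);
    let guard = cell.borrow();
    assert_eq!(*guard, 5);
}
```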

Both points demonstrate the unwanted ways the items in the default exclusion list may impact your code. It also expands the possibilities for practical applications of this feature.

Half-open range patterns support

At the time of writing (1.70.0-nightly), the half-open range patterns feature was still unstable and required a dedicated feature attribute to enable it. However, we still provided support for it: the latest Rust plugin releases highlight invalid patterns, such as 1..=, 2..., ...3, or a bare ..=, as errors.
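For reference, here is a sketch of valid half-open range patterns in a match (our own example; these positions compile on recent stable Rust, while other positions were still feature-gated at the time of the post):

```rust
fn classify(n: u32) -> &'static str {
    match n {
        ..=9 => "one digit",      // open at the low end
        10..=99 => "two digits",
        100.. => "three or more", // open at the high end
    }
}

fn main() {
    assert_eq!(classify(7), "one digit");
    assert_eq!(classify(42), "two digits");
    assert_eq!(classify(1234), "three or more");
}
```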

Macros

During the release cycle, our team has provided a host of fixes and improvements to the handling of procedural macros of different kinds, bringing them on par with plain Rust code outside of the macros’ scope. Let’s recap what exactly was done.

Inject language or reference inside macro body

With the recent updates comes the opportunity to mark the code block inside the macro invocation body as a specific language via the Inject language or reference intention. The IDE will handle the block as a syntactically complete language unit and provide comprehensive code assistance inside of it.

Attribute procedural macros

Type hints and hints for chained method calls

It would not be an exaggeration to say that code reference information, including type info, allows you to reason about code quickly and exhaustively. Inlay type hints conveniently provide type information in the places across the code where it is not given explicitly. It’s time for this feature to appear inside procedural macros, too: type hint deduction inside macros now works both for let bindings and for intermediate type results in chained method calls.
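As a plain-Rust illustration of the hints in question (our own snippet; inside procedural macros they now appear the same way), the IDE can annotate each intermediate result of a chain with its inferred type:

```rust
fn main() {
    let labels = vec![1, 2, 3]
        .iter()                       // hint: std::slice::Iter<'_, i32>
        .map(|n| format!("item {n}")) // hint: Map<Iter<'_, i32>, ...>
        .collect::<Vec<String>>();    // hint on `labels`: Vec<String>

    assert_eq!(labels.len(), 3);
    assert_eq!(labels[0], "item 1");
}
```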

Intention actions

Previously, intention actions such as Specify type explicitly were unavailable for code generated during attribute macro expansion. With the experimental feature named org.rust.macros.proc.attr enabled, everything generated at the macro expansion stage becomes available to code analysis, enabling the intention actions and inlay type hints mentioned above.

Macros application for impl or trait members

Attribute macros applied to a member inside an impl or trait block might alter that member’s signature, which is crucial for code analysis. The org.rust.macros.proc.attr experimental feature allows the Rust plugin’s engine to take the evaluated token stream into account as the resulting refreshed signature. This preliminary expansion contributes to correct type inference.

Errors highlighting

The Rust plugin applies its rich analysis toolset to the whole range of Rust code. The latest releases add yet another capability: code inspections now highlight errors inside attribute macros. In contrast to external linters like Cargo Check or Clippy, which have provided this functionality for a long time, the Rust plugin uses its own techniques for rendering error messages right in the IDE code editor window.

Note that error highlighting appears only in items with successfully expanded attribute macros. Another point to keep in mind is that errors are highlighted if the annotation text range can be mapped to the macro call body.

Function-like macros

Type hints

Type hints for let bindings inside function-like macro call bodies are rendered in the same way as inside attribute macros.

Intention actions

Function-like procedural macros previously supported almost no intention actions. This has started to change in this release cycle: just as for attribute procedural macros, another experimental feature, org.rust.ide.intentions.macros.function-like, enables intention actions to work neatly in various scenarios.

Conclusion

Finally, we are grateful to all the external contributors who supported and helped improve the plugin during this release cycle:


That was only the first part of the major highlights of the 2023.1 release. In the second part, we will look at improvements in the following areas: fresh code insight capabilities, further user experience enhancements for running and debugging, additional recognized standard errors, and more.

We are happy to receive your feedback in the comments section below or on Twitter. Our issue tracker is there if you have a proposal or a bug to report. See you again, and thanks for staying tuned!

Your Rust team

JetBrains

The Drive to Develop

Learn Rust With JetBrains IDEs
https://blog.jetbrains.com/rust/2023/02/21/learn-rust-with-jetbrains-ides/
Tue, 21 Feb 2023

There is no royal road to learning a programming language; everyone does it differently. Some read a lot (books, blogs, tutorials, docs, Reddit discussions, StackOverflow answers, and more); some ask questions and look at examples; some write their own code and work on pet projects; some solve problems; some explore ecosystems – whatever works best for them. Some even do all of that. People usually start by learning language features and ways to combine them in programs. At some point they learn different approaches to problem-solving. Sooner or later they move from using standard library components to external libraries and exploring how to test, debug, write logs, profile their apps, and so on.

Rust is no exception: No single educational resource is enough to help you master it. Still, at JetBrains we have something to suggest – our free Learn Rust course, which covers many of the learner needs we’ve just mentioned.

The course borrows text from The Rust Programming Language, a book by Steve Klabnik and Carol Nichols with contributions from the Rust community. It features most of the exercises from the well-known rustlings set, and we designed about a quarter of the exercises specifically for the course. Rather than merely compiling pre-existing materials, we carefully combined the texts and exercises and put them in an IDE format to create a new way to learn Rust.

Why Rust

Rust has been listed as the most-loved programming language in the StackOverflow developer survey for 7 years in a row, as well as the most-wanted (tied with Python).

Recently, Rust has made its way to the list of supported languages for writing Linux kernel components (and it’s the second-placed language in that list, right after C!). Libcurl, one of the most used libraries for fetching data over networks, is gradually moving toward Rust. Google reports that the share of Rust code in their Android implementation is increasing from release to release, and this helps them to reduce the risk of vulnerabilities and improve security. Microsoft heavily relies on Rust to provide memory safety in their products, while Amazon uses Rust to ensure the sustainability of their infrastructure.

Developers using other programming languages, such as JavaScript or Python, often turn to Rust when they need to achieve better performance for tooling and libraries.

Last but not least, Rust has a very welcoming community, always willing to help and encourage beginners.

With all that being said, potential learners should be aware that Rust has a difficult learning curve and requires a systematic approach to learning. The latter is exactly what we aim to provide with our Learn Rust course.

Learning Rust in an IDE

Our Learn Rust course is built on the educational platform provided by the JetBrains Academy plugin. This plugin is available in many JetBrains IDEs, including CLion, GoLand, and IntelliJ IDEA Community Edition, among others, and allows you to learn not only Rust but several other programming languages for free.

IDE-based courses involve reading educational materials, exploring code examples, and solving problems, structured into lessons and course sections. Each lesson includes a sequence of bite-sized steps, each being either a theory step with an example to play with, or a problem step with a problem to solve. Problem steps provide an easy way to check your solutions.

When working with any IDE-based course, you have a fully functional IDE window with 3 panels: a course view, a code editor, and a description, as in the screenshot below.

The added bonus of taking a course in an IDE is that, while learning a language, you gain software development experience at the same time. While performing the learning exercises, you also write, check, fix, run, debug, and test code, just as a real coder would as part of their daily software development routine. By the end of the course, you’re not only left with a solid knowledge of the programming language, but you’ve also familiarized yourself with a professional development tool and are well on your way to becoming a software developer.

Check out the IntelliJ IDEA for Education page to learn more about the benefits of learning programming in an IDE.

Course specifics

The Learn Rust course, just as the book it’s based on, assumes that you have a working knowledge of some programming language. Rather than offering an introduction to programming, it teaches you to program in Rust specifically, and as such skips most of the basic topics commonly found in programming courses.

Following the structure of The Rust Programming Language, the course contains the following sections:

  • Introduction
  • Common Programming Concepts
  • Understanding Ownership
  • Structs, Methods, Enums, and Pattern Matching
  • Modules
  • Common Collections
  • Error Handling
  • Generic Types, Traits, and Lifetimes
  • Writing Automated Tests
  • Standard Library Types
  • Fearless Concurrency
  • Macros

Learn Rust contains 331 steps in total, including 210 theory steps and 121 problem steps. Each course step comes in the form of a Cargo package, making it possible to showcase and learn not only basic features of the Rust language, but also modules, crates, macros, package-level tests, external dependencies, etc.

Working on each step involves both reading and working with code. If the code in the step has the main function, it can be run:

For most problem steps, you are exposed to a single code file (like in the screenshot above), but sometimes you have access to the whole package structure so you can explore all the significant components, such as the project description files:

From this point, you can reach a crate’s documentation or learn about the available releases.

Naturally, while discussing testing Rust packages, we expose the test files:

You can run individual tests or all of them at once before attempting to check your solution by clicking on the Check button.

These course features are aimed at introducing you to the best practices of software development in IDEs as early as possible. We believe that using professional tools right from the beginning of your learning journey can be a huge benefit. With first-class Rust support in JetBrains IDEs, now you can do this as you learn Rust, too.

Learn Rust and tell us what you think!

Clearly, building a solid knowledge of Rust requires more than just following this course, but we think it can serve as a perfect start into the exciting world of Rust programming for many future developers. We hope you’ll enjoy studying Rust with us! Feel free to share your feedback in the comments section below or by contacting us at academy@jetbrains.com.

IntelliJ Platform: Latest Milestones and Achievements
https://blog.jetbrains.com/rust/2023/02/13/intellij-platform-latest-milestones-and-achievements/
Mon, 13 Feb 2023

Most JetBrains IDEs are built on top of the IntelliJ Platform, which is continuously being enhanced and improved in various ways. When the IntelliJ Platform team introduces a new feature or improvement to the platform, each IDE then “inherits” those, sometimes as-is and sometimes by adding customizations particular to the product and technology.

In this post, we’d like to give you an overview of the latest enhancements originating from the IntelliJ Platform. Some of them you may already be using; others you’ve heard about but haven’t tried yet; and some you might not be aware of at all. Hopefully, you’ll find something that will help raise your productivity or make your work more pleasant as you use the Rust plugin in IntelliJ IDEA, CLion, or GoLand.


IDE enhancements

New UI

IntelliJ-based IDEs are receiving a new UI, which provides easy access to essential features while progressively disclosing complex functionality. Our UI team carefully studied all the methods that assist developers while simultaneously reducing the burden of spending long hours in front of the monitor, thus improving the clarity of vision and thought. Clean, modern, and powerful – these are the words that best describe the upcoming UI!

The key changes are:

  • Simplified main toolbar with new VCS, Project, and Run widgets.
  • New tool windows layout.
  • New Light and Dark color themes.
  • Updated icon set.


To switch to the Beta version of the new UI, go to Settings/Preferences | Appearance & Behavior | New UI.

Settings synchronization

It’s now possible to synchronize settings across all the JetBrains IDEs you use, including CLion, IntelliJ IDEA, GoLand, and others. The settings are stored in the cloud attached to your JetBrains Account, so you can conveniently reuse them. Enable or disable this behavior and manage exactly what you’d like to synchronize in Settings/Preferences | Settings Sync | Enable Settings Sync. Learn more for CLion or IntelliJ IDEA.

Preview for intention actions

Previously, when your IDE suggested intention actions – quick ways to apply simple modifications to your code – you had to first apply them before you would see what the resulting code looked like. Recently, the IntelliJ Platform helpfully added the ability to see the suggested changes in a special window before actually applying them.

On the face of it, this kind of preview merely reduces the number of actions required to apply an intention, but we think there’s more to it than meets the eye. Intentions are at the heart of the code-editing process as we envision it, which can be summed up as coding at the speed of thought. The moment you conceive of how you want to change your code, the IDE is already offering you a helping hand in the form of an intention action. We trust that this new ability to preview the results of an intention action will help you get even more in tune with your IDE.

VCS

Few developers today could do without a VCS of some kind, which is why refining the interaction between the VCS and the IDE is crucial. With that in mind, the IntelliJ Platform has added the following improvements:

  • The Cloning repository progress bar on the Welcome screen.
  • Annotate with Git Blame.
  • Git File History works in the new UI without an index.
  • IntelliJ-based IDEs provide you with Code Vision hints about code authorship based on the VCS history.
  • The Review list for GitHub and Space was redesigned to help reduce cognitive load and provide the most important information about requests.

Docker and Kubernetes

These two technologies have permeated every developer’s life, so we’re doing our best to make sure that our IDEs provide the best possible developer experience for working with Docker and Kubernetes.

Docker

In addition to supporting Docker in WSL, the Colima runtime, and Rancher container management (which allows establishing more options for Docker daemon connections), we’ve enhanced the Docker UI with redesigned containers, images, networks, and volumes actions.

Docker Compose hasn’t been left unattended, either. New Docker Compose targets can be created and configured to run and debug applications in containers managed by Docker Compose.

Kubernetes

The Kubernetes plugin can now integrate with the Telepresence tool. Resources loaded from the cluster can now be modified from the editor tab, and the editor supports werf.yaml and related Helm template files.

Remote development

Modern problems require modern solutions, which is why JetBrains provides remote development capabilities for today’s interconnected world. Whether you want to utilize the full power of a remote machine, secure sensitive information on a remote server, reproduce a development environment, or even streamline tech interviews – all of these scenarios are now possible with our tools.

Here are some highlights of how the latest releases have improved the experience of engineers using the remote development tool set:

  • Improved debugging functionality and multiple actions for code examination when developing remotely.
  • New widget showing CPU load, memory, disk capacity, and other parameters.
  • New SSH key forwarding security setting that allows authenticating access to Git repositories from a remote machine without storing private keys on a remote server. 
  • The remote JetBrains Client now lets you edit more file types, such as PNG images, UML diagrams, Jupyter Notebook files, and Android layouts, on a par with text-based files.


Remote development integrations are now additionally available for:


If you use CLion, watch this webinar recording to learn more about the example remote development scenarios available.

Hidden improvements

While some enhancements are plain to see, others stay hidden even though they impact your everyday experience just as much, if not more. Here are just a few:

  • IDE startup and indexing have become faster and the UI more responsive, which altogether contributes to a faster and smoother experience.
  • Adopting the Vector API, which is designed to express vector computations that compile at runtime to vector instructions on supported CPU architectures, has allowed us to achieve performance superior to equivalent scalar computations.
  • Search Everywhere has been refined, making search results more predictable and accurate. Results in the Files tab are now ranked using machine learning, resulting in shorter search sessions and a more pleasant overall experience.

Related products

Software development is much more than just writing code in your IDE and building your apps. It’s also about effective teamwork, constantly learning new things, and sharing knowledge with others. Here are a few of our other tools that can help you do that.

Educational tools

The EduTools plugin is a beautiful tool designed to help you master a new programming language, like Rust, or pass on your programming insights to Rustlings and other students.

With our recently announced new vision for JetBrains Academy, you can now learn Java, Kotlin, Scala, Python, Go, and other programming languages, improve your existing skills right in your IDE, or create your own educational courses and publish them on JetBrains Marketplace.

JetBrains Space

Space is an all-in-one solution for software teams that brings all the collaboration tools together into one place, including project management, issue tracking, Git hosting, code reviews, continuous integration, package repositories, remote backend orchestration, and even chats.

Over the past year, IntelliJ IDEA and CLion bundled integration with Space so engineers with Space access could enjoy an abundance of benefits right off the bat.

If you’re already using Space, here are some updates for you concerning all the IDEs:

  • It is now possible to orchestrate backends for your remote development process directly from any of the chosen IDEs. Learn more.
  • While reviewing code changes in the IDE, you can post a review comment right away or save it as a draft.

Conclusion

We hope you’ll find some of these changes and features worth trying and adopting. Your feedback is important to us! Let us know in the comments below or on Twitter what else you’d like to see added to JetBrains IDEs or other tools that could help you be more productive with Rust.

Your Rust team

JetBrains
The Drive to Develop

The State of Developer Ecosystem 2022 in Rust: Discover recent trends
https://blog.jetbrains.com/rust/2023/01/18/rust-deveco-2022-discover-recent-trends/
Wed, 18 Jan 2023

The results of the Developer Ecosystem Survey 2022 are in!

Every year JetBrains conducts a survey to deepen our knowledge of the developer community and how it has changed over the past year. 

Let’s dive into the report’s findings with the help of three members of the Rust community:

Aleksey Kladov

Member of Rust’s Dev tools team.

Blog | GitHub

Andre Bogus

Clippy maintainer, TWiR editor, Rust contributor, professional Rustacean.

BlogGitHub

Florian Gilcher

Florian Gilcher

Managing Director at Ferrous Systems. 

GitHub | Twitter

Work or hobby? 

The share of developers using Rust for work grew from 16% in 2020 and 2021 to 18% in 2022.

Expert analysis

Florian: “I’ve noticed that although Rust is growing, the relative numbers here are staying the same. That’s good! It means the number of Rust hobbyists is increasing and they can turn professional at a good rate. Employers take note: If you look beyond people with ‘X years of professional Rust experience’, you’ll find a big hiring pool of people willing to switch from their current jobs.”

Aleksey: “Anecdotally, Rust did transition from ‘a weird new language’ to ‘it wouldn’t be insane to put this into production’ a couple of years ago, so growth here is expected, and very welcome.”

Experienced Rustaceans are hard to find

According to the survey, 24% of Rustaceans have used Rust for more than one year. This is an increase of 4 percentage points from last year, but it’s still not easy to find experienced Rust developers. 

Expert analysis

Andre: “People who have used Rust in the past are by and large still using it. The relative share of newcomers has been nearly constant, showing a healthy organic growth pattern. The share of senior Rustaceans has grown, which is good news for employers seeking them.”

Florian: “Rust is a young language, so it’s hard to find people with years of experience. For that reason, managers adopting Rust should budget for training and other forms of education and support for their teams. Also, consider that someone who has programmed for decades can adopt a new programming language rather quickly with help.”

Rust and other languages

JavaScript / TypeScript remains the most popular language used along with Rust, and its share is gradually increasing. 

Expert analysis

Florian: “I’m pleasantly surprised here – I expected the proportion of pure Rust projects to be a little lower. I’m not surprised by the JavaScript numbers; the communities are very close and get along well with each other.”

Andre: “Roughly half of the responders are polyglot programmers, using another language alongside Rust. It looks like the percentages roughly mirror current popularity. As they say, the perfect tool is often the one you already hold.”

Debugging 

The share of developers using println! or dbg! macros decreased from 60% to 55%, while the usage of UI debugging in IDEs increased from 23% to 27% since last year.

Expert analysis

Andre: “More people use a debugger now, likely because support has improved since last year. The dbg! macro still unsurprisingly takes the cake, as a quick and easy way to get insight about the runtime state. And let’s not forget that, with Rust being as picky as it is, applications often don’t need debugging in the first place.”

Aleksey: “Debuggers are as much of a pain as ever. I myself use eprintln! (via pd snippet in my IDE), but I miss great debuggers from my time with Kotlin.”
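As a quick aside, here is what the dbg!-style debugging mentioned above looks like in practice (a minimal sketch, not taken from the survey itself):

```rust
fn main() {
    let x = 2;
    // dbg! prints the file, line, expression, and its value to stderr,
    // then returns the value, so it can wrap any subexpression in place.
    let y = dbg!(x * 3) + 1;
    assert_eq!(y, 7);
    // The eprintln! alternative Aleksey mentions:
    eprintln!("y = {y}");
}
```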

Profiling

The most popular profiling tool used by Rust developers is perf, but the vast majority of Rustaceans – 82% – don’t use profiling tools at all. 

Expert analysis

Florian: “It’s amazing – but also not surprising – that in a language that many people use for speed, performance measurement isn’t a common practice. My theory is that performance tooling is inaccessible and differs based on platform.”

Aleksey: “That’s squarely the toolchain’s fault! All the tools listed here are impossibly fiddly to use. If you do profiling full-time (so, a perf engineer on a big project, a-la nnethercote), you can invest time and effort into learning all the perf flags. If, however, you want to spend only a fraction of time doing perf investigation, the learning curve is very unfriendly. I wish Rust took a page from Go’s book, which has pprof.StartCPUProfile as part of the toolchain. This is going to require a huge effort, though.”

Rust for CLI

Developers use Rust mostly for building CLI tools, systems programming, and web development. 

Expert analysis

Andre: “CLI tools have proven to be a niche area where Rust shines. Last year, almost half of developers were developing them. What’s also interesting is that, while blockchain companies often proudly boast that they use Rust, only 6% of respondents actually work in that space. This is either a case of outsized hype and marketing or the few Rustaceans who work in blockchains are very effective developers. Or both.”

Florian: “Given that the public perception of Rust is that there are a lot of Rust jobs in the blockchain industry, I’m quite surprised to see this option below even embedded and academic use.”

Target platform

Unsurprisingly, Linux takes first place as the primary Rust target platform. The integration of Rust into the Linux world continues as initial Rust support was merged into the Linux kernel in October.

Expert analysis

Andre: “Linux reigns supreme, though Windows has made headway. I think this may be due to Microsoft investing in Rust, combined with the fact that Linux users are often early adopters and the growing community now has more conservative users, who tend to use the OS their PC came with.”

Florian: “Another tiny surprise for me – I would have placed WebAssembly somewhere around embedded use. Once again, this shows how important polling is.”

Plugins

The share of developers using rust-analyzer has increased from 25% to 45%. IntelliJ Rust is used by 42% of developers, compared to 47% last year. 

Expert analysis

Aleksey: “Huge growth for rust-analyzer! Not surprising, given that the rust-analyzer project recently became a part of the wider Rust organization, and the Rust Language Server (RLS) has been deprecated in favor of rust-analyzer. I am personally quite happy that a lot of folks use advanced IDEs for Rust, and that there’s healthy competition and collaboration between IntelliJ Rust and rust-analyzer!”

Andre: “rust-analyzer has made major headway, now being the official LSP implementation for Rust. IntelliJ Rust has stayed very strong, too. Having worked with both, I still switch between them every now and then. Two fine pieces of engineering. Kudos!”

That’s it! Check out the full report and let us know what you think about the findings. You can ping us on Twitter or leave your comments here. Thanks for reading!

IntelliJ Rust: Updates for 2022.3 https://blog.jetbrains.com/rust/2022/12/16/intellij-rust-updates-for-2022-3/ Fri, 16 Dec 2022 14:59:16 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/12/Blog_Featured_image_1280x600-5.png https://blog.jetbrains.com/?post_type=rust&p=308471 In the 2022.3 release cycle we’ve enabled macro expansion for function-like and derive macros and build script evaluation by default. 

We’ve implemented code insight features like the intention action preview, among others. The Run/debug section includes various improvements, and we’ve changed how types are rendered in the Debug window. 

Additionally, there are a number of performance enhancements. 

Language Support

Macro expansion

Function-like and derive macros are now expanded by default.

Items generated by procedural macros get syntax highlighting, completion, the Show Macro Expansion popup, and other features that were already available for items generated by declarative macros. Generated items are suggested in code completion and considered when using other code insight features.

Note that attribute macro expansion is still disabled by default. If you want to try it out, enable the org.rust.macros.proc.attr experimental feature.

Also, we fixed procedural macro expansion on the nightly Rust toolchain.

You can read more about macros and how they are supported in IntelliJ Rust in this blog post.

Build script evaluation

We’ve enabled build script evaluation by default. 

IntelliJ Rust now builds and executes all build scripts in the project – including build scripts in external dependencies – every time the project model is loaded.

A typical use case for this feature is to generate code using a build script and include it via include!(concat!(env!("OUT_DIR"), "/path_to_generated_file.rs")).

If you want to know more about build scripts and how to work with them in IntelliJ Rust, please refer to this blog post.

Support for intra-doc links

Code completion and other code insight features now work for intra-doc links. Some minor cases are not yet supported.
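For reference, an intra-doc link is a bracketed path inside a doc comment; a minimal sketch (item names are illustrative):

```rust
/// Doubles a value.
///
/// See also [`triple`] and [`i32::checked_mul`] — these bracketed paths
/// are intra-doc links, which now get completion and navigation support.
pub fn double(x: i32) -> i32 {
    x * 2
}

/// Triples a value; links back to [`double`].
pub fn triple(x: i32) -> i32 {
    x * 3
}

fn main() {
    assert_eq!(double(2), 4);
    assert_eq!(triple(2), 6);
}
```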

Support for let_chains

let_chains – a feature that extends if let and while let expressions with chaining – is now supported in IntelliJ Rust. 
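At the time of the post, let_chains was still a nightly feature (behind #![feature(let_chains)]). The sketch below shows the chained form in a comment together with its stable nested equivalent:

```rust
fn first_even(nums: &[i32]) -> Option<i32> {
    // With let_chains (nightly at the time of the post):
    //
    //     if let Some(&n) = nums.first() && n % 2 == 0 {
    //         return Some(n);
    //     }
    //
    // Stable equivalent with nested `if let`:
    if let Some(&n) = nums.first() {
        if n % 2 == 0 {
            return Some(n);
        }
    }
    None
}

fn main() {
    assert_eq!(first_even(&[4, 5]), Some(4));
    assert_eq!(first_even(&[3, 4]), None);
}
```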

Code insight

Intention action preview

A preview for intentions and quick-fixes is now supported. This feature is turned on by default in all JetBrains IDEs in v2022.3, and now it’s available for Rust.

Please note that some intentions and quick-fixes don’t currently have a preview. See this issue for details.

Rename items expanded from macros

The Rename refactoring now works for items expanded from macros. 

Improved completion and auto-import for procedural macros

We’ve improved completion for procedural macros inside attributes and added support for auto-importing them. 

Auto-import and completion now also work for custom derive macros. 

Inline type alias refactoring

There is now the Inline refactoring for type aliases.  

Completion for stdlib items inside scratches

Items from the standard library are now resolved in standalone files (files that don’t belong to any Cargo project), scratches, and injected Rust code. You can get highlighting, completion, and other code insight features for them.

Run/debug

Run doc tests from the gutter

The functionality for running documentation tests is now the same as for normal tests. For instance, you can run doc tests by clicking the green arrow in the gutter.

Show size and contents of slices

We’ve added GDB/LLDB pretty-printers, and you can see the contents and size of Rust slices in the Debug window.

Render range types

We’ve added summary pretty-printers for range types (e.g. core::ops::range::Range) to improve their rendering in the Debug window.

Improved rendering of type names in the debugger on Windows 

We’ve changed how type names are rendered in the debugger on Windows. 

Now the Rust type names that contain MSVC-specific wrappers are rendered in the Rust way. 

For instance, tuple$<Foo, ref$<Bar>> is replaced by (Foo, &Bar).

This feature is enabled by default. To disable it, uncheck the Decorate MSVC type names checkbox in Settings | Build, Execution, Deployment | Debugger | Data Views | Rust settings.

We’ve enabled native Rust support for the MSVC LLDB debugger on Windows.

Native Rust support currently improves several things, the most visible of which is that the Rust primitive type names are displayed properly (e.g. u64 instead of unsigned long long).

Performance improvements

In this release cycle, we’ve implemented a number of performance enhancements to speed up name resolution and type inference. This resulted in improved performance for most code insight features, as they depend on name resolution and type inference. Code highlighting and completion benefited the most from these changes.

In some cases, highlighting may be twice as fast as before when analyzing a new file. We also made our cache system smarter, meaning highlighting may be three times faster than before when you make changes outside of functions.

The improvement should be particularly noticeable in files where nested paths like Foo<Foo<Foo<Foo<...>>>> are used. For example, usage of the typenum crate leads to large nested types, which are resolved much faster with the recent changes.

___

As always, a big “thank you” goes to the external contributors who helped us in this release cycle:

That’s it for the most recent updates in the IntelliJ Rust plugin! Tell us what you think about our new features by writing a comment here, pinging us on Twitter, or filing an issue in the plugin’s issue tracker. Thank you!

Your Rust team
JetBrains
The Drive to Develop

What Every Rust Developer Should Know About Macro Support in IDEs https://blog.jetbrains.com/rust/2022/12/05/what-every-rust-developer-should-know-about-macro-support-in-ides/ Mon, 05 Dec 2022 11:27:00 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/12/image-10.png https://blog.jetbrains.com/?post_type=rust&p=303360 We use a lot of tools for software development. Compilers, linkers, package managers, code linters, and, of course, IDEs are essential parts of our work and life. There are areas where single-tool efforts are not enough to provide the best user experience. In Rust, macro support is definitely something we can’t wholly tackle without broad community understanding and collaborative effort.

We, the IntelliJ Rust plugin team, are now partially enabling support for procedural macros, specifically enabling function-like and derive procedural macro expansion by default while hiding support for attribute procedural macros behind the org.rust.macros.proc.attr experimental feature flag. While we mostly refer to the IntelliJ Rust plugin here, the same things apply to your favorite editor powered by rust-analyzer. In fact, we are very similar regarding macro support. Even more importantly, we face the same problems. This more technical blog post by Lukas Wirth explores the same area.

Let’s discuss several fundamental ideas regarding macros and their support in IDEs, including main ideas and approaches, good and bad parts, implementation details, and problems.

Learning about Rust macros and taking IDEs into account

Most Rust developers aren’t concerned about implementing macros, but we use them a lot. Macros simplify common operations (println! and vec!), reduce boilerplate code (like serde), provide additional features for our programs by generating a lot of code (clap or actix-web), or allow us to embed DSL code in our Rust programs (notably yew).

The Rust Programming Language book provides a short but accessible overview of macros, their flavors, and their kinds. It also gives you a glimpse of how they can be implemented. Unfortunately, The Book doesn’t go into detail about how macros actually work, although understanding this is crucial to get an idea of what’s going on in an IDE whenever we use them.

Also unfortunately, most available online educational resources about macros ignore the IDE experience. Understanding macros based on a compiler-only experience is a bit misleading. The goal of a compiler is simply to check your code and either report an error or build an executable/library. For the compiler, there is no difference if there is an error in a macro implementation or in the code where we call that macro. Macro authors usually share the same attitude towards code they’ve gotten from a user: macros either do their job or report an error regarding the input they received. If we add an IDE into this equation, things change a lot. Most of the time, an IDE deals with incorrect code. Although we expect IDEs to report errors, their primary goal is not to complain about the inability to do this and that! IDEs drive users to the correct code by staying alive in the presence of errors and suggesting fixes. The compiler is of almost no help to an IDE here, but we do expect some help from the macro implementor – more on this later.

Turning code into code with macro expansion

Macros take code as an argument and are able to add new code, replace a given code with anything else, or modify this code in any way imaginable. This is important! Whenever you provide some code as an argument to a macro, chances are you don’t know what is actually going to be compiled in the end because the code can be heavily modified. This input code doesn’t have to be valid Rust (although there are some technical restrictions), but the resulting code must be valid Rust.

The process of executing a macro is called macro expansion. Macros are expanded when our code is compiled. More interestingly, they should also be expanded when we write our code in an IDE. Why? Because an IDE should be aware of expanded code in order to provide us with reasonable navigation and code completion. For example, if a macro generates some new function, we should see it in the completion suggestions, although there may be no place in the code to navigate to. A user writes and sees code before macro expansion happens, but an IDE is expected to provide an experience as if it has already happened.

Look at the diagram below. Suppose a user expects some IDE services at points A and B in the source code. Point A is inside a macro call, and point B follows it. In order to provide these services, an IDE has to expand the macro first. Then, it analyzes the expanded code, comes up with the necessary information, and delivers it to the user.

If a macro fails to expand properly, an IDE is in trouble and its ability to deliver helpful information to a user is severely limited.

The two flavors of macros

Macros come in two flavors: declarative and procedural. These flavors mainly refer to the ways in which macros are implemented, not how they are used. They also differ in the ways the compiler and IDEs work with them. For declarative macros, the IntelliJ Rust plugin uses its own built-in expansion mechanism. For procedural macros, the story is much more complicated. In short, Rust libraries that provide procedural macros are compiled into dynamic libraries for a corresponding operating system. These libraries are then called by the compiler or IDE at the moment of macro expansion. While the compiler uses its own mechanism for calling these libraries, the macro invocations from IDEs are facilitated by proc_macro_srv, a server for procedural macro expansion. The following diagram outlines these macro expansion processes for both declarative and procedural macro calls.

The proc_macro_srv macro expander we use is a component developed by the Rust Analyzer team. Their implementation is based on code originally written by a student during an internship at JetBrains. The procedural macro expander has become so ubiquitous that it is now included in the compiler distribution itself and is shipped by rustup.

If you feel adventurous and are interested in the technical details behind procedural macros, you can read the corresponding series in our blog (part I, part II). You can also dive into this epic story by Amos, where he explains how proc_macro_srv made its way into the compiler distribution.

IDEs prefer declarative macros because the machinery to expand them is significantly simpler and usually more stable. Procedural macros demand much more care, and the machinery is more fragile.
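A small sketch of why the declarative case is simpler for tooling: the expansion is fully described by pattern-matching rules visible in the source, so an IDE can expand calls with its own built-in engine, without compiling or loading any library:

```rust
// A declarative macro: the expansion rules are right here in the source,
// so an IDE can apply them itself, no dynamic library required.
macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // The plugin can expand this to `4 * 4` on its own.
    assert_eq!(square!(4), 16);
}
```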

Three kinds of procedural macros and how we deal with them

There are three kinds of procedural macros: derive, function-like, and attribute. These kinds of macros work mostly the same way in terms of macro expansion, but they are used for different things and put different limitations on the code used as an argument. We’ll use code from this repository to showcase the usage of different kinds of procedural macros. Feel free to open this repository in your IDE and explore IDE support on your own.

Derive macros

Derive macros expect valid struct, enum, or union declarations as input. They can’t modify their input in any way, and generate new code (mostly impl-blocks) based on their input. Derive macros are relatively straightforward in terms of IDE support. All of the newly generated code is included in code analysis, so the IDE takes it into account.

Let’s look at an example. Suppose we have the following trait:

trait Name {
    fn name() -> String;
}

We also have a macro that derives the name function for a struct or enum it is applied to:

use proc_macro::{TokenStream, TokenTree};

#[proc_macro_derive(NameFn)]
pub fn derive_name_fn(items: TokenStream) -> TokenStream {
    fn ident_name(item: TokenTree) -> String {
        match item {
            TokenTree::Ident(i) => i.to_string(),
            _ => panic!("Not an ident"),
        }
    }

    let item_name = ident_name(items.into_iter().nth(1).unwrap());

    format!("impl Name for {} {{
    fn name() -> String {{
        \"{}\".to_string()
    }}
}}", item_name, item_name).parse().unwrap()
}

In this macro, we take an identifier of an item it is applied to and emit the corresponding impl block. Note that the implementation is kept fragile for simplicity. For example, we assume that the item’s identifier comes right after the introducing keyword for the item (struct, enum, or union), so we skip the keyword and then take the next token.

With these definitions available, we can use them in our code as follows:

#[derive(NameFn)]
struct Info;
 
#[derive(NameFn)]
enum Bool3 { True, False, Unknown }

As you can see in the screenshots below, the IntelliJ Rust plugin now has all of the information it needs to assist us with using the name function:
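To make the example self-contained here, this is roughly the code that #[derive(NameFn)] generates for Info, written out by hand:

```rust
trait Name {
    fn name() -> String;
}

struct Info;

// What #[derive(NameFn)] expands to for `struct Info`:
impl Name for Info {
    fn name() -> String {
        "Info".to_string()
    }
}

fn main() {
    // Because the IDE sees this generated impl, `Info::name()`
    // completes and resolves even though it is never written by the user.
    assert_eq!(Info::name(), "Info");
}
```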

Function-like macros

Function-like procedural macros are invoked using the ! operator. They expect any sequence of code fragments (called tokens) that can be grouped in brackets, braces, or parentheses, and they output valid Rust code built from the input. As a result of macro expansion, the call site is replaced with the output. Once again, the user sees a macro called in code, but the IDE should make it feel like the expanded code is in place.

As an example, we’ll implement a function-like macro that can be used as follows:

declare_variables! {
   a = 'a',
   b = 2,
   c = a,
   d, // will be defaulted to 0
   e = "e",
}

This macro provides a short syntax for declaring variables. Thanks to the “Show recursive macro expansion” context action, we can see the result of expansion:
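In plain Rust, that expansion corresponds roughly to the following (written out by hand; the exact output depends on the macro implementation):

```rust
fn main() {
    // declare_variables! { a = 'a', b = 2, c = a, d, e = "e" }
    // expands to ordinary let declarations:
    let a = 'a';
    let b = 2;
    let c = a;
    let d = 0; // `d` had no initializer, so it defaults to 0
    let e = "e";

    assert_eq!(c, 'a');
    assert_eq!(b + d, 2);
    assert_eq!(e.len(), 1);
}
```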

Every item in the shortened declaration list gets expanded into a full Rust variable declaration. There are no declarations of these variables in the source code, but the IntelliJ Rust plugin is smart enough to provide us with code completion and navigation, as seen in the screenshots below:

This IDE behavior is not a given. If we look at the macro implementation, we’ll see that original input tokens make their way to the output let declarations:

let variable = decl.next().unwrap();
// ...
tokens.extend([ // construct a new let-declaration
    Ident::new("let", Span::mixed_site()).into(),
    variable, // comes from input
    Punct::new('=', Spacing::Alone).into(),
    value,
    Punct::new(';', Spacing::Alone).into(),
]);

If we attempt to construct let declarations from a String literal as in the derive macro above, there would be no such connection between a variable declaration site and its usages. As we mentioned previously, an IDE’s abilities depend greatly on the macro implementation.

Attribute macros

Attribute macros are attached to existing code items which must be valid Rust code. Attribute macros can play with their input in any way they want, and their output fully replaces the input. Consider the following as a counterintuitive example of an attribute macro, but keep in mind that attribute macro support in the IntelliJ Rust plugin is not yet enabled by default:

use proc_macro::TokenStream;

#[proc_macro_attribute]
pub fn clear(_attr: TokenStream, _item: TokenStream) -> TokenStream {
    TokenStream::new()
}

This macro removes a code item it is applied to. Considering this, what do you think about the following IDE suggestions?

If we apply the first one, we would expect no problems – the report_existence function is eradicated during compilation, along with the call to a no-more-in-existence function. On the other hand, applying the second suggestion would lead to a compilation error:

error[E0425]: cannot find function `report_existence` in this scope
  --> demo/src/main.rs:20:5
   |
20 |     report_existence();
   |     ^^^^^^^^^^^^^^^^ not found in this scope

If we have an external linter enabled, we would know that in advance, but it would be better not to suggest this in the first place. In fact, we can avoid these issues with the org.rust.macros.proc.attr experimental feature flag. If we enable it, there will be no such suggestion:

Does it make sense to suggest anything at this point? In these circumstances, we provide a generic list of suggestions as if there was no attribute at all. We believe that these suggestions can be useful in dealing with erroneous situations around macro invocations.

We still have some work to do regarding our support for attribute procedural macros. For example, we have to make sure that enabling it doesn’t break the user experience. The expansion of attribute macros may create tension between a user who types some code and an IDE pretending something totally different (the result of macro expansion) is in place. This is an important difference from function-like macros. If users type some arbitrary tokens inside a function-like macro call site, their expectations regarding IDE support are significantly lower compared to attribute macro inputs, which have to be valid Rust. Ultimately, it’s user expectations that drive users towards public complaints and issues in issue trackers! Understanding the reasons behind awkward IDE behavior in the presence of procedural macro invocations should help.

Now, let’s send a message to our fellow macro implementers.

What every Rust macro implementor should take into account

We don’t aim to provide a complete guide for macro implementers. Instead, we’d like to bring your attention to several issues that could help IDEs if addressed adequately by those who write their own macros.

Try to write a declarative macro if possible. IDE machinery is much easier and more efficient when it comes to declarative macro expansion.

If you still want to write a procedural macro, avoid stateful computations and I/O. It’s never guaranteed when and how many times macro expansion is going to be invoked. Connecting to a database every time a user waits for completion suggestions seems unnecessary.

Procedural macros process tokens that come in TokenStreams. Every input token bears its location in the source code known as a span. IDEs can use these spans for syntax highlighting, code navigation, and error reporting. If you want an IDE to be able to navigate to something generated from those tokens, reuse those spans. The more input reused in output, the better. This allows the IDE to provide better user assistance.

What if macro input is malformed? The Rust Reference clearly states that:

Procedural macros have two ways of reporting errors. The first is to panic. The second is to emit a compile_error macro invocation.

Note that the compiler-centric approach used here is definitely not IDE-friendly, given that an IDE has to expand macros on the fly as users type their code. Why do IDEs have to be so quick in expanding macros? Because expanding macros gives much more information about the context that can be used for code completion suggestions and navigation. An IDE (and the user) needs that information.

The critical point is that an IDE expands macros not to compile or run code, but to get information. If a macro invocation results in an error, this information is lost. But is there an alternative? If there is an error in the input, there should be an error in the macro invocation! It also seems like a mistake to demand macro authors rewrite their parsers so that they accept malformed syntax, recover from errors, and try to come up with suggestions on how to fix them. Writing such parsers can be quite challenging. Neither macro implementers nor the proc_macro ecosystem seems ready for that. We are also not sure that they should, even if they could.

We believe that there is an easier way. Both IntelliJ Rust and Rust Analyzer invoke macro expansion for computing completion suggestions. They mark the caret position (an expected insertion point) with a dummy identifier to be able to compute code completion suggestions around it. We also make it clear for macro implementers that the invocation is done for completion purposes. Such invocations can be thought of as a side channel between the IDE and macro implementers that can be used to deliver valuable information (about the macro inputs, for example) to expose potentially available names or expected grammar.

Look at this pull request to the yew library aimed to improve the user experience with the completion of component names inside an html! macro invocation. While being more of a proof of concept, it should encourage macro implementers to care more about the user experience in IDEs without much hassle.

Let’s make writing Rust code easier together! And let’s continue closing those annoying issues and implementing new exciting features.

Your IntelliJ Rust plugin team

Rust developers survey https://blog.jetbrains.com/rust/2022/11/28/rust-developers-survey/ Mon, 28 Nov 2022 13:23:39 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/11/DSGN-15127_Rust_Dev_Survey_Blog_Featured_image_1280x600.png https://blog.jetbrains.com/?post_type=rust&p=301099 Hello,

We are interested in learning how exactly Rust and C/C++ ecosystems coexist. That’s why we’d like to learn from Rust developers about their experience and best practices with C and C++ code in their Rust code base. We would appreciate it if you could take 5 minutes to fill out a survey.

The data will be used internally to further develop JetBrains Rust tools and their positioning in the ecosystem. We might publish some anonymized summaries later on our website, blog, Twitter, or other outlets.

SHARE EXPERIENCE

We’ll hold a raffle where selected participants will receive Amazon gift cards or 1-year personal licenses for the JetBrains All Products Pack.

Your Rust team
JetBrains
The Drive to Develop

Evaluating Build Scripts in the IntelliJ Rust Plugin https://blog.jetbrains.com/rust/2022/10/24/evaluating-build-scripts-in-the-intellij-rust-plugin/ Mon, 24 Oct 2022 09:54:04 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/10/build-script-evaluation.png https://blog.jetbrains.com/?post_type=rust&p=288137 Build scripts are a Cargo feature that allows executing arbitrary code prior to building a package. We implemented support for build script evaluation in the IntelliJ Rust plugin a long time ago, but up until now, we hid it under the org.rust.cargo.evaluate.build.scripts experimental feature. As we are now enabling this feature by default, we’ve decided to explain what it means for our users.

What is a build script, and why should we care?

Most Rust projects use build scripts to deal with native dependencies, configure platform-specific options, or generate some source code. Let’s recall the build script functionality in Cargo. Whenever you put the build.rs file into the root folder of your package (the actual path can be configured), Cargo compiles it (and its dependencies, if any) to an executable and runs that executable before trying to build the package itself. Build script behavior can be configured via environment variables. Build scripts communicate with Cargo by printing lines prefixed with cargo:, thus influencing the rest of the build process.

Some bits of the build script functionality affect the IDE user experience. The code generated during build script evaluation becomes an essential part of the codebase and should be treated as such. The user should be able to explore that code, go to the generated definitions, and see them in code completion suggestions.

For Cargo, evaluating build scripts is just an early step in building the whole package. Once the build script is evaluated, Cargo proceeds with building everything else. If this process fails at any stage, be it either build script evaluation or source code compilation, Cargo will report an error.

For an IDE, though, the overall process is a little bit different. Whenever a user opens a project, the IDE should provide an environment in which the build script is already evaluated, but the package is not built yet. The source code of the package may have compilation errors, or its dependencies may not be ready for you to work with. If the IDE failed to open a project because it had some issues, that would mean we failed to make a good IDE! Of course we can’t have that, so we must make sure the IDE is able to open the project, and our user has a chance to fix all of the issues and make the project ready for building.

The IntelliJ Rust plugin evaluates build scripts every time the project model is loaded (such as when the Cargo project is opened or refreshed). Unfortunately, it’s not enough for the plugin to analyze a build script statically, because it may contain arbitrary code. We have to compile and run it. Moreover, it’s not enough to run it as a standalone program, because its output should be processed by Cargo. Yet another problem is that there is no way to stop Cargo after evaluating a build script.

To alleviate all these difficulties, the plugin runs cargo check on a package but supplies a specially crafted rustc-wrapper that only runs build scripts and builds proc-macro library crates, skipping compilation of the rest of the package’s source code. Once a build script is built into a binary and evaluated, Cargo emits a set of configured parameters, some of which are used in the IDE. We also look for the generated source code files and include them in the regular code analysis process.

Example: generating code in build scripts

Let’s look at this small project with a custom build script that generates some code and see how the IntelliJ Rust plugin deals with it. The project has the following structure:

.
├── Cargo.toml
├── build.rs
└── src
    └── main.rs

We want to generate the say_hello function in build.rs and call it later from main.rs. Let’s look at the different components of the solution.

Implementing a build script

Suppose we’ve got the code variable with the desired content, for example:

pub mod generated {
    pub fn say_hello() {
        println!(r#" _______________
< Hello, world! >
---------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
"#)
    }
}

This particular content was created with the help of the beautiful cowsay command-line utility. We use it during the execution of build.rs if it is installed on the system; if not, the printed text is a little more formal.

Code generation is as easy as writing the code to a text file:

use std::{env, fs};
use std::path::Path;

// `code` holds the generated source from the previous step.
let out_dir = env::var_os("OUT_DIR").unwrap();
let dest_path = Path::new(&out_dir).join("generated.rs");
fs::write(&dest_path, code).unwrap();

First, we read the OUT_DIR variable from the environment. This is the only directory a build script is supposed to write to, and it is configured by Cargo itself before executing the build script. Then we create the file and write the generated content to it.

If cowsay is available, we also want to generate an additional configuration option that will be available later during the regular package build:

println!("cargo:rustc-cfg=cowsay");

Finally, we instruct Cargo to rerun build.rs only if the build script code is changed:

println!("cargo:rerun-if-changed=build.rs");

Without this instruction, Cargo would rerun the build script on any change to any file in the package.

Specifying build script dependencies in Cargo.toml

In order to use external crates in a build script, we need to declare them in the dedicated [build-dependencies] section of the Cargo.toml file as follows:

[build-dependencies]
which = "4.3"

Here we use the which crate to check whether cowsay is available on the system. This crate won't be included in the application binary unless it is also specified in the [dependencies] section of Cargo.toml; in this project, it's used exclusively by the build script.
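The project relies on the `which` crate for this check. A dependency-free approximation of the same idea, written here with only the standard library for illustration (it is not the blog project's actual code, and it ignores Windows `.exe` suffixes for brevity), scans PATH by hand:

```rust
use std::env;

/// Rough stand-in for `which::which("cowsay").is_ok()`: returns true if an
/// executable named `cowsay` exists in any directory listed in PATH.
/// (On Windows the file would be `cowsay.exe`; ignored here for brevity.)
fn cowsay_available() -> bool {
    env::var_os("PATH")
        .map(|paths| env::split_paths(&paths).any(|dir| dir.join("cowsay").is_file()))
        .unwrap_or(false)
}

fn main() {
    if cowsay_available() {
        // Make the `cowsay` cfg option visible to the main compilation.
        println!("cargo:rustc-cfg=cowsay");
    }
}
```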

Including generated code in the main module

Our package's main file includes the content of the generated file, defines an auxiliary function whose implementation depends on the conditional option configured by the build script, and calls both the auxiliary and the generated functions:

include!(concat!(env!("OUT_DIR"), "/generated.rs"));

#[cfg(cowsay)]
fn print_warning() {
    println!("Beware of cows!")
}

#[cfg(not(cowsay))]
fn print_warning() {}

fn main() {
    print_warning();
    generated::say_hello();
}

Every time the cowsay conditional option is set by Cargo, we get a special warning about cows blocking the road before saying hello.

Exploring the project in the IDE

When the IntelliJ Rust plugin executes the build script, it knows precisely which conditional options are configured, where to look for the generated code, and how to navigate to it whenever asked to:

As you can see from the screencast above, cowsay is, in fact, installed on the machine used for the demonstration. Running the Docker Run configurations Without-cowsay and With-cowsay, available in the repository, demonstrates the results of building the project in different environments:

Editing build scripts in the IDE

The Rust plugin detects changes to the package’s build script, because they may affect the project model. The plugin can either reload the project automatically or notify the user as follows:

The warnings and errors collected during the build script evaluation are shown in the Build/Sync tool window:

Support for build script evaluation is still a work in progress. Stay tuned for more new features and improvements!

]]>
IntelliJ Rust: Updates For the 2022.2 Release Cycle https://blog.jetbrains.com/rust/2022/08/03/intellij-rust-updates-for-the-2022-2-release-cycle/ Wed, 03 Aug 2022 13:36:55 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/08/Blog_Featured_image_1280x600-1-2.png https://blog.jetbrains.com/?post_type=rust&p=271466 In this release cycle, we’ve enabled a new approach for detecting changes in configuration files, as well as a new way to reload project models. We’ve improved performance and implemented various type inference improvements.

IntelliJ Rust now highlights outdated or missing dependencies in Cargo.toml. The plugin can convert JSON to structs via copy-paste.

There are plenty of other features and improvements – to learn about them all, read the detailed description below.

Language Support

Project model reloading

In v2022.2, we’ve improved the way IntelliJ Rust updates the project model.

The plugin now detects changes in config files even if they are not saved to disk. This change should make project model reloading more predictable.

IntelliJ Rust now also takes into account changes in the Cargo config, the toolchain file, and build scripts.

After you’ve changed configuration files, you will now see the floating Load Cargo Changes button. Click on the button, and the IDE will load changes to make your project work correctly.

You can change the settings for project model reloading in Preferences / Settings | Build, Execution, Deployment | Build Tools.

By default, the Reload project after changes in the build scripts checkbox is ticked, and the External changes option is selected. This means that the project model will be reloaded automatically only for external changes (for example, when you get updated files from version control). For any changes made in the IDE, you’ll be offered the Load Cargo Changes button, which allows you to load changes manually.

If you select the Any changes option, the project model will be updated automatically for all changes.

Note that Cargo settings have been moved to Preferences / Settings | Build, Execution, Deployment | Build Tools | Cargo for consistency with other settings for build tools.

Performance improvements for macro calls

We stopped doing some unnecessary cache invalidations. As a result, when you type in macro calls, completion and highlighting should now work faster.

Type inference improvements

In this release cycle, we’ve implemented various fixes and improvements for type inference:

  • We’ve implemented unsized coercions in the type inference engine. This fixes false-positive errors like type mismatch between Box<[u8]> and Box<[u8; 4]>.
  • We’ve fixed the inference of closure parameter types when closure is assigned to a variable and parameter types are inferred after the assignment.
  • We’ve fixed the usage of the ? operator with the Try trait. The unstable Try trait was moved to core::ops::try_trait::Try and its associated types were renamed. This has fixed a number of issues. For instance, now the ? operator works for Poll<Result>.
  • Looping over a type parameter implementing the Iterator trait has been fixed and now works as expected.
  • The recently added unstable Destruct trait is now derived for all types.
  • The plugin now considers negative impl when inferring types.
  • Type inference should now work faster thanks to changes in how common type inference cases are handled.
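The unsized-coercion fix from the first bullet covers code like the following minimal example, which the plugin previously flagged as a type mismatch even though rustc accepts it:

```rust
fn main() {
    // `Box<[u8; 4]>` (boxed sized array) coerces to `Box<[u8]>` (boxed slice);
    // the plugin used to report a false-positive type mismatch here.
    let bytes: Box<[u8]> = Box::new([1u8, 2, 3, 4]);
    assert_eq!(bytes.len(), 4);
    assert_eq!(bytes[0], 1);
}
```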

Compiler errors detection

IntelliJ Rust now detects more compiler errors:

  • An attempt was made to import an item whereas an extern crate with this name has already been imported (E0254).
  • The name chosen for an external crate conflicts with another external crate that has been imported into the current module (E0259).
  • The name for an item declaration conflicts with an external crate’s name (E0260).
  • The self keyword cannot appear alone as the last segment in a use declaration (E0429).
  • The self import appears more than once in the list (E0430).
  • An invalid self import was made (E0431).
  • Visibility is restricted to a module which isn’t an ancestor of the current item (E0742). 
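As a minimal illustration of E0429 (the function name here is made up for the example), `self` cannot stand alone as the last segment of a `use` path, but it may appear inside braces to import the module itself:

```rust
// Invalid: `self` alone as the last segment of a `use` path.
//     use std::fmt::self;            // error[E0429]
// Valid: `self` inside braces imports the `fmt` module itself.
use std::fmt::{self, Write};

fn render(x: i32) -> Result<String, fmt::Error> {
    let mut out = String::new();
    write!(out, "value = {x}")?; // `Write` lets us write! into a String
    Ok(out)
}

fn main() {
    assert_eq!(render(7).unwrap(), "value = 7");
}
```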

Also, detection of duplicate definitions has been improved.

Support for #![recursion_limit] in name resolution

IntelliJ Rust now takes the #![recursion_limit] attribute into account, which controls macro expansion depth. Previously, the plugin used the default value for the recursion limit, which was 128 steps. But some macros require more steps, and this fix allows expanding them.

If you don’t need macros to expand fully, you can adjust the Maximum recursion limit for macro expansion setting.
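A toy declarative macro shows why the limit matters: each token peeled off costs one recursive expansion step, so long inputs can exceed the default limit of 128. The macro below is a standard counting idiom, not the plugin's code:

```rust
// In a real crate, raise the limit with an inner attribute at the crate root:
//     #![recursion_limit = "256"]

// Counts its arguments by peeling off one token per recursive expansion step,
// so `count_tts!` with N tokens needs N + 1 expansion steps.
macro_rules! count_tts {
    () => { 0usize };
    ($head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

fn main() {
    assert_eq!(count_tts!(a b c d e), 5);
}
```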

Code insight

Inspections for dependencies in Cargo.toml

We’ve enabled by default two inspections for dependencies in Cargo.toml: Invalid crate version and New crate version available.

The Invalid crate version inspection detects dependency crates with invalid versions in Cargo.toml.

The New crate version available inspection informs you that a newer version of a crate is available. There’s also a quick-fix to update the crate version that is available from the inspection popup. You can see all available quick-fixes by pressing ⌥↩ (Alt+Enter).

You can control the inspection settings in Preferences / Settings | Editor | Inspections | Rust. Here you can disable inspections, change the severity level for a particular inspection (displayed by icons in the IDE), or choose how the relevant code is highlighted in the editor.

Inspections for unused_must_use and clippy::double_must_use

We now have inspections and quick-fixes for:

  • the unused_must_use lint that detects unused results of a type flagged as #[must_use]
  • the clippy::double_must_use lint that checks for a #[must_use] attribute without further information on functions and methods that return a type already marked as #[must_use].
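A minimal illustration of what the two lints catch (the function names are made up for the example):

```rust
// `#[must_use]` marks a return value that callers should not silently drop;
// ignoring the result of `doubled` triggers the unused_must_use lint.
#[must_use]
fn doubled(x: i32) -> i32 {
    x * 2
}

// clippy::double_must_use fires on a redundant attribute like this one:
// `Result` is itself already marked #[must_use], so the extra attribute
// without a message adds nothing.
#[must_use]
fn parse_flag(s: &str) -> Result<bool, String> {
    match s {
        "on" => Ok(true),
        "off" => Ok(false),
        other => Err(format!("unknown flag: {other}")),
    }
}

fn main() {
    // doubled(21);              // would trigger unused_must_use
    let answer = doubled(21);    // binding the result satisfies the lint
    assert_eq!(answer, 42);
    assert_eq!(parse_flag("on"), Ok(true));
}
```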

Convert JSON to Rust types via copy-paste

When you copy JSON data and paste it in the editor, the IDE suggests converting it to the struct type. All of the necessary struct field tags are generated and added automatically.

We’d like to thank Jakub Beránek, who implemented the JSON-to-struct conversion feature during the past few releases.

Rename refactoring for metavariables in macros

The Rename refactoring now works for metavariables in macros.

Highlight URLs in string literals

URLs in string literals are now highlighted when you hover over them, and you can open them in a browser. To open a link, press ⌘ (Ctrl) and click it.

Run/Debug

If the Emulate terminal in output console option is enabled, the proper terminal is now used in the Run tab. This option also now works on Windows.

You can enable the Emulate terminal in output console option in the Run configuration settings.

___

As usual, a big thank you goes to the external contributors who have helped us in this release cycle:

That’s it for the most recent updates in the IntelliJ Rust plugin. Tell us what you think about our new features! Write a comment here, ping us on Twitter, or file an issue in the plugin’s issue tracker. Thank you!

Your Rust team

JetBrains

The Drive to Develop

]]>
Procedural macros under the hood: Part II https://blog.jetbrains.com/rust/2022/07/07/procedural-macros-under-the-hood-part-ii/ Thu, 07 Jul 2022 13:49:48 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/07/proc-macros4.png https://blog.jetbrains.com/?post_type=rust&p=259927 In our previous blog post, we discussed the essence of Rust’s procedural macros. Now we invite you to dive deeper into how they are processed by the compiler and the IDE.

Compilation of procedural macros

To start, let’s see how we might write a ‘Hello, world’ program using a separate crate inside a procedural macro:

In order to build this project, Cargo performs two calls to rustc:

cargo build -vv

rustc
 --crate-name my_proc_macro
 --crate-type proc-macro
 --out-dir ./target/debug/deps
 my-proc-macro/src/lib.rs

rustc
 --crate-name my-hello-world
 --crate-type bin
 --out-dir ./target/debug/deps
 --extern my_proc_macro=./target/debug/deps/libmy_proc_macro-25f996c8b2180912.so
 src/main.rs

During the first call to rustc, Cargo passes the procedural macro source with the --crate-type proc-macro flag. As a result, the compiler creates a dynamic library, which will then be passed to the second call along with the initial ‘Hello, world’ source to build the actual binary. You can see that in the last line of the second call:

--extern my_proc_macro=./target/debug/deps/libmy_proc_macro-25f996c8b2180912.so
 src/main.rs

This is how the process can be illustrated. 

Here’s the first call to rustc:  

And here’s the second call: 

First call to rustc: dynamic library

The intermediary dynamic library includes a special symbol,  __rustc_proc_macro_decls_***__, which contains an array of macros declared in the project. 

This is what a procedural macro's code could look like after the changes rustc makes during compilation:

extern crate proc_macro;
use proc_macro::TokenStream;

// #[proc_macro]
pub fn foo(body: TokenStream) -> TokenStream { body }

use proc_macro::bridge::client::ProcMacro;
#[no_mangle]
static __rustc_proc_macro_decls_4bd76f2d7cc55ae0__: &[ProcMacro] = &[
   ProcMacro::bang("foo", crate::foo)
];

The ProcMacro array includes information on the macro types and names, as well as references to the procedural macro functions (crate::foo).

The __rustc_proc_macro_decls_***__  symbol is exported from the dynamic library, and rustc finds it during the second call.

readelf --dyn-syms --wide ./target/debug/deps/libmy_proc_macro-25f996c8b2180912.so 
 ...
 1020: 00000000001a29a0    16 OBJECT  GLOBAL DEFAULT   22 __rustc_proc_macro_decls_4bd76f2d7cc55ae0__
 ...

Second call to rustc: ABI

During the second call, rustc finds the __rustc_proc_macro_decls_***__ symbol and recognizes a function reference there. 

At this point, you might expect the compiler to command the dynamic library to expand the macro using the given TokenStream:

However, this can't be done, because Rust doesn't have a stable ABI. An ABI (Application Binary Interface) covers calling conventions and data layout, such as the order in which structure fields are placed in memory. In order for a function to be called from a dynamic library, its ABI should be known to the compiler.

Rust’s ABI is not fixed, meaning that it can change with each new compiler version. So there are two requirements for the dynamic library’s ABI and the program’s ABI to match:

1) Both the library and the program should be compiled using the same compiler version.

2) The codegen backend of the compiler should also be the same.

As you might know, rustc is compiled by itself. When a new compiler version is released, it is first compiled by the previous version and then compiled by itself. Similarly, our procedural macro library is compiled by the very compiler it is then linked into. So it seems that rustc and procedural macros are compiled using the same compiler version, and their ABIs should match. But here comes the second part of the equation: the codegen backend.

Rustc codegen backend and proc macros

This is where it's helpful to take a look at how things used to work. Prior to 2018, Rust's ABI had been used for procedural macros, but then it was decided that the compiler's backend should be replaceable. Today, although the default rustc backend is LLVM-based, there are alternative backends such as Cranelift, as well as a GCC-based one.

How is it possible to add more backends to the compiler while simultaneously keeping procedural macros working? 

There are two solutions that seem to be obvious, but they have their flaws:

  • C ABI

In the case of the C ABI, a function should be declared with extern "C":

extern "C" fn foo() { ... }

This kind of function can take C types or the types declared with the repr(C) attribute:

#[repr(C)] struct Foo { ... }

  • Full serialization of macro-related types

In this case, macro-related types like TokenStream would be serialized and then passed to the dynamic library. The macro would de-serialize them back to their inner types, call the function, and then serialize the results to pass them back to the compiler:

But it is not that simple. The correct solution came in pull request #49219, called “Decouple proc_macro from the rest of the compiler”. The actual process adopted in rustc is as follows:  

  1. Before rustc calls a procedural macro, it puts the TokenStream into the table of handles with a specific id. Then the procedural macro is called with that id instead of a whole data structure. 
  2. On the dynamic library’s side, the id is wrapped into the library’s inner TokenStream type. Then, when the method of that TokenStream is called, that call is passed back to rustc with the id and the method name.
  3. Using the id, rustc takes the original TokenStream from the table of handles and calls the method on it. The result goes to the table of handles and gets the id that is used for further operations on that result, and so on.
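The table of handles at the center of this protocol can be sketched in a few lines. This is a simplified model of the idea, not rustc's actual implementation:

```rust
use std::collections::HashMap;

/// Simplified model of rustc's table of handles: real values stay on the
/// compiler side; the macro side only ever sees opaque integer ids.
struct HandleTable<T> {
    next_id: u32,
    items: HashMap<u32, T>,
}

impl<T> HandleTable<T> {
    fn new() -> Self {
        HandleTable { next_id: 0, items: HashMap::new() }
    }

    /// Store a value and hand out an id for it (step 1 of the protocol).
    fn intern(&mut self, value: T) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.items.insert(id, value);
        id
    }

    /// Resolve an id back to the value when a method call comes in (step 3).
    fn get(&self, id: u32) -> Option<&T> {
        self.items.get(&id)
    }
}

fn main() {
    let mut table = HandleTable::new();
    let id = table.intern("fn main() {}".to_string()); // stands in for a TokenStream
    assert_eq!(table.get(id).unwrap(), "fn main() {}");
}
```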

How is this approach better than the simpler version with full serialization of all structures or C ABI?

  • Spans reference a lot of the compiler's inner types, which are better left unexposed. Also, implementing serialization for all of those inner types would be expensive.
  • This approach allows backcalls (the procedural macro might want to ask the compiler for some additional action).
  • With this approach, macros can (in theory) be extracted into a separate process or be executed on a virtual machine. 

Proc macros and potentially dangerous code

As we saw, a procedural macro is essentially a dynamic library linked into the compiler. That library can execute arbitrary, and potentially dangerous, code. For example, that code could segfault, causing the compiler to segfault too, or call the system's fork and duplicate the compiler process.

Aside from this apparently dangerous behavior, procedural macros can also make other system calls, such as accessing a file system or the web. These calls are not necessarily unsafe, but might not be good practice in the first place. An illustrative example here is procedural macros that make SQL requests, often accessing databases at compile time.

Procedural macros in IDEs

In order for the IDE to analyze procedural macros on the fly, it needs to have the expanded code on hand at all times.

In the case of declarative macros, the IDE can expand macros by itself. But in the case of procedural macros, it needs to load the dynamic library and perform the actual macro calls.

One approach might be to simply replace the compiler with the IDE, keeping the same workflow as described above. However, bear in mind that a procedural macro can segfault (and cause the IDE to segfault) or, for example, occupy a lot of memory, and the IDE will fail with out-of-memory errors. For these reasons, procedural macro expansion must be extracted into another process separate from the IDE.

That process is called the expander. It links the dynamic library and uses the same interface as the compiler to interact with procedural macros. Communication between the IDE and the expander is performed using full data serialization.

The expander is implemented similarly in both Rust Analyzer and IntelliJ Rust.

________________________________

We hope this article has helped you learn more about procedural macros compilation and the way procedural macros are treated by the IDE. If you have any questions, please ask them in the comments below or ping us on Twitter

Your Rust team

JetBrains

The Drive to Develop

]]>
What’s New in IntelliJ Rust for 2022.1 https://blog.jetbrains.com/rust/2022/05/19/what-s-new-in-intellij-rust-for-2022-1/ Thu, 19 May 2022 13:29:26 +0000 https://blog.jetbrains.com/wp-content/uploads/2022/05/Blog_Featured_image_1280x600-8.png https://blog.jetbrains.com/?post_type=rust&p=246967 In this release cycle, we’ve improved our language support, code insight, run/debug support, and tool integrations, and added many enhancements and fixes. 

Language Support

Single-step macro expansion 

We’ve significantly improved how macro expansion works.

Previously, the macro expansion process was implemented in multiple steps (up to 64), with indexing restarting after every step. This approach was not ideal: macro expansion was slower than it could be, because caches were cleared at the end of each indexing run, and the multiple steps caused a blinking UI.

We decided to redesign macro expansion in IntelliJ Rust some time ago, but this required some major architectural changes. First of all, we had to update the name resolution engine.

The work on updating the name resolution engine started at the end of 2020. We’ve been introducing the new engine gradually. For some time it worked alongside the old one, because in some cases the old engine worked better.

Now we’re happy to say the name resolution engine 2.0 is mature enough. We’ve been able to redesign the macro expansion engine, collapsing the expansion process into a single step. This has fixed the blinking during the expansion process and significantly sped up the macro expansion itself.

The plugin still has a limit of 64 steps, so macros that require more steps will not be expanded fully. We’ll keep working to improve the macro expansion process.

Name resolution improvements

Our new name resolution engine, which is enabled by default, now resolves macros 2.0 declared inside function bodies.

We also have a couple of improvements for macros in documentation tests:

  • Name resolution for macros 2.0 in doctests has been fixed.
  • The plugin now properly expands and resolves attribute procedural macros in doctests.

Thanks to improved name resolution, the plugin is able to provide better completion, quick documentation, and other code insight features.

You can find more information about the new name resolution engine in these blog posts:

Support for inline_const, inline_const_pat and ~const

IntelliJ Rust now properly parses the inline_const and inline_const_pat syntax that allows you to use inline constant expressions. The features are annotated as experimental, meaning you can’t use them with the stable toolchain.

The tilde const (~const) syntax is now supported, too.

Code insight

The Extract trait refactoring

With this new refactoring, you can quickly extract members of impl blocks into a trait and undo the changes with a single keystroke.

To refactor your code, place the caret on an item and go to Refactor | Refactor this | Extract trait. Or right-click on an item and go to Refactor | Extract trait. The Refactor this menu is also available via the shortcut ⌃T (Ctrl+Alt+Shift+T).

The same refactoring will help you to create new traits out of existing ones.

Improved ML completion

We’ve updated the machine learning completion model to improve sorting of completion suggestions. The updated model:

  • prefers expressions whose type matches the expected type.
  • differentiates between inherent, trait and blanket implementations.
  • recognises async, const and unsafe contexts.  

Remember, the data for ML completion is gathered anonymously during the IDE’s Early Access Program and your source code is never collected – only information about your interactions with the code completion UI.

You can learn which elements were reordered by the ML algorithm (note the green upward and red downward arrows). For this, tick the Mark position changes in the completion popup checkbox in Settings / Preferences | Editor | General | Code Completion. In this section you can also disable ML completion, if you prefer not to use it.

The Unnecessarily qualified path inspection

The plugin now has an Unnecessarily qualified path inspection that corresponds to the unused_qualifications lint. The new inspection detects unnecessarily qualified names, so if an item from another module has been brought into scope, you don’t have to qualify it again.

If you hover over the grayed-out item, you will get a popup window with a quick-fix to Remove unnecessary path prefix.
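For example (a minimal illustration of the lint, not the plugin's test code):

```rust
use std::mem::swap;

fn main() {
    let (mut a, mut b) = (1, 2);
    // `swap` is already in scope, so writing `std::mem::swap(&mut a, &mut b)`
    // here would be flagged by the inspection as an unnecessarily qualified
    // path, with a quick-fix to drop the `std::mem::` prefix.
    swap(&mut a, &mut b);
    assert_eq!((a, b), (2, 1));
}
```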

Run/Debug

Run targets

We’ve added initial support for Run targets. With this feature, you can build and run your code in an environment your app is intended for – in Docker containers, in WSL, or on remote machines via SSH.

To add a new target, click Run | Manage targets. Alternatively, click Add configuration or Edit configurations if you already have some, and then click Manage targets. You can also select the target from the dropdown menu.

The Run targets feature is available in CLion, IntelliJ IDEA Ultimate, and GoLand for now. Note that remote debugging using Run targets is currently unavailable. Plus, there are some other limitations. Please see the corresponding ticket for more details.

File links for dbg! macro output in the terminal

If you use dbg! macros for debugging, you can now jump from the terminal to the lines with dbg! usages in your Rust files.
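As a reminder of how dbg! behaves:

```rust
fn main() {
    let x = 2;
    // dbg! prints something like `[src/main.rs:5:13] x * 3 = 6` to stderr,
    // with the file and line the IDE can now turn into a clickable link,
    // and returns the value unchanged, so it can sit inside an expression.
    let y = dbg!(x * 3) + 1;
    assert_eq!(y, 7);
}
```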

Completion for private items in the debugger

Private items are now suggested when you type code in the debugger’s Evaluate expression bar or add a new watch. Previously, private items were filtered out. But since the debugger has access to them, private items should also be suggested in autocompletion. Hopefully, this will improve the debugging workflow.

Profiler in WSL 2

In the last release cycle, we enabled WSL 2 support by default. Now we’ve added support for the profiler on WSL toolchains.

When working with WSL, make sure to set the WSL toolchain location in Settings / Preferences | Languages & Frameworks | Rust.

You can learn more about setting up the profiler in WSL 2 on this page. The profiler is available only in CLion.

Valgrind memcheck in WSL

You can now use Valgrind memcheck on Windows via WSL. Like the profiler, Valgrind works in CLion only.

Tools integration

Custom parameters for rustfmt

You can now use rustfmt from a non-default toolchain and configure additional arguments and environment variables. For this, go to Settings / Preferences | Languages & Frameworks | Rust | Rustfmt.

External linter widget

IntelliJ Rust now has a widget for external linters. You can see it in the status bar on the right. The widget informs you about a linter being turned on or off. You will also receive a warning when a linter affects the performance of the IDE. Click on the widget to see the settings for external linters.

___

As usual, a big thank you goes to the external contributors who have helped us in this release cycle:

That’s it for the most recent updates in the IntelliJ Rust plugin. Tell us what you think about our new features! Write a comment here, ping us on Twitter, or file an issue in the plugin’s issue tracker. Thank you!

Your Rust team

JetBrains

The Drive to Develop

]]>