Setting Up Vim with an LSP for Scala

I write Scala a bunch these days at work. The language (or maybe the Spark culture) really wants you to use an IDE. As much as I’ve tried to use IntelliJ, I just can’t bring myself to make such a big switch. If you are interested in language servers (which typically provide autocomplete,) Vim, or specifically setting that up for Scala, read on.

This article is not as organized or clear as I normally strive for. When I started it I had planned less, but after writing each section I ended up learning additional major details. I learned so much that it’s hard to organize into a single clear document; it may have been better to split this up into three or four separate posts, but I just don’t want to. I hope you find it helpful nonetheless.

🔗 TL;DR

Install metals with coursier, hook your Gradle build up to bloop, and point ALE (or vim-lsp, or vim-lsc; configs for those two are in the appendix) at the metals binary.

🔗 The Goal

You can do a lot with an LSP, but all I really want is:

  • goto definition
  • autocomplete methods

All the rest I’ll figure out later (or never.)

🔗 Metals, but which Plugin?

The de facto LSP for Scala is called metals. Their webpage links to nvim-metals, but that’s NeoVim only and I use Vim Classic.

The easiest way to install metals is via coursier. Install coursier, and then run this to install metals: cs install metals.
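
Concretely, that whole step is just the following (a minimal sketch; it assumes coursier’s cs launcher is already installed):

$ cs install metals
$ which metals    # sanity check that coursier's bin directory ended up on your PATH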

The Metals site also links to coc-metals. coc-metals does work, but it’s unmaintained and depends on a bunch of Node stuff. If you are OK with those caveats it was definitely easy to get working, but I don’t love having yet another runtime for this.

Setting those aside, here are the other options I’ve found and can discuss:

  • ALE, which has code to detect different LSPs (and non-LSPs)
  • vim-lsp, which leaves that detection to the user, though vim-lsp-settings does the autoconfiguration for you
  • vim-lsc, which also leaves it up to the user

I’ll start off discussing ALE, but put some example configs of the other two in an appendix.

In case you need it, here are all the initialization options for Metals.

🔗 bloop

Sadly, we are getting ahead of ourselves. At ZipRecruiter we use Gradle to build our JVM projects. Metals requires the use of bloop to integrate with Gradle. Honestly, if you aren’t already using bloop you’ll be glad to find it, since it makes builds so much faster.

Building with straight gradle and a hot cache takes about 9 seconds for me, but building with a hot cache with bloop takes about 200 milliseconds. Awesome.

First get the bloop runner command, again via coursier, with cs install bloop. You won’t need this for the LSP integration but being able to use the bloop CLI to quickly trigger a build or a test is really useful.

Add this to your build.gradle (you might need to tweak it slightly; we had to modify ours so it would work with Artifactory:)

buildscript {
    dependencies {
        // the Bloop plugin for Gradle
        classpath 'ch.epfl.scala:gradle-bloop_2.12:1.5.0'
    }
}

allprojects {
    // apply it to every project so bloopInstall exports them all
    apply plugin: 'bloop'
}

Then run ./gradlew bloopInstall to resolve all the packages and generate the bloop config. (metals should be able to do the above two steps for you, but the bloopInstall step takes long enough that I’d rather run it explicitly and wait for completion.)
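
If it worked, bloop’s generated config ends up under .bloop in the project root, one JSON file per project (the file names below are illustrative; they should mirror your project names):

$ ls .bloop
datalake--onedaydataset.json
datalake--onedaydataset-integrationTest.json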

You should be able to run bloop projects and see your project (or maybe it and its tests) in the output:

$ bloop projects
datalake--onedaydataset
datalake--onedaydataset-integrationTest

And then trigger compilation with: bloop compile datalake--onedaydataset.

It should take as long as gradle the first time, but the next time you run the same command it should be really fast.
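
Since the bloop CLI can run tests too, the same pattern should work for the integration test project from the listing above:

$ bloop test datalake--onedaydataset-integrationTest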

In a perfect world at this point we could validate that metals works without any kind of editor, but I haven’t found clear instructions or tooling for that. If you know how, let me know and I’ll add it! What I’d like is a shell script that just fires up metals (or any LSP), initializes it, and logs output or something. I tried running metals by hand and writing the initialization JSON to it, but nothing happened.
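
My best guess at why writing the JSON by hand did nothing: LSP servers expect JSON-RPC messages framed with a Content-Length header, so bare JSON on stdin gets silently ignored. Something like this rough sketch at least gets the framing right (the rootUri is a placeholder path, and a real session would go on to send initialized, shutdown, and exit):

body='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"processId":null,"rootUri":"file:///home/frew/code/zr0/datalake/onedaydataset","capabilities":{}}}'
printf 'Content-Length: %s\r\n\r\n%s' "${#body}" "$body" | metals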

When we configure plugins later on we’ll pass the isHttpEnabled initialization option to metals. This will enable an HTTP server (on my computer at http://127.0.0.1:5031/), which gives us a lot of introspection into metals. You can use this to run the metals doctor, which does a self-interrogation to make sure everything is set up just right. Very handy.
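
Once a plugin has actually started metals with that option, you can poke the server from a shell to confirm it’s up (the port is whatever your instance reports; mine was 5031):

$ curl -s http://127.0.0.1:5031/ | head -n 5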

🔗 ALE

In theory ALE autoconfigures LSPs, but in practice nothing is perfect and I ended up needing to basically ignore all of the autoconfiguration for Metals. I opened an issue to resolve it in the future.

One option is to fix the metals integration for ALE. The way I did that was to replace the metals-vim string with just metals and to add build.gradle to the list of potential_roots:

--- a/ale_linters/scala/metals.vim
+++ b/ale_linters/scala/metals.vim
@@ -1,7 +1,7 @@
 " Author: Jeffrey Lau - https://github.com/zoonfafer
 " Description: Metals Language Server for Scala https://scalameta.org/metals/
 
-call ale#Set('scala_metals_executable', 'metals-vim')
+call ale#Set('scala_metals_executable', 'metals')
 call ale#Set('scala_metals_project_root', '')
 
 function! ale_linters#scala#metals#GetProjectRoot(buffer) abort
@@ -16,6 +16,7 @@ function! ale_linters#scala#metals#GetProjectRoot(buffer) abort
     \   'build.sbt',
     \   '.bloop',
     \   '.metals',
+    \   'build.gradle',
     \]
 
     for l:root in l:potential_roots

I am pretty sure that the use of ale#path#ResolveLocalPath is just wrong though.

I did this because at work I deal with a lot (like, dozens) of Scala projects, so hardcoding a project_root is a nonstarter.

Another option, which is probably simpler all things considered, is to just configure the integration directly:

" if you do not call packloadall you'll get a weird error.
packloadall
call ale#linter#Define('scala', {
\   'name': 'frew_metals',
\   'lsp': 'stdio',
\   'executable': '/home/frew/bin/metals',
\   'command': '%e run',
\   'initialization_options': { 'rootPatterns': 'build.gradle', 'isHttpEnabled': 'true' },
\   'project_root': '/home/frew/code/zr0/datalake/onedaydataset',
\})

Annoyingly this means setting the project_root at linter definition time, which is a little silly, but such is life. Maybe worse, the rootPatterns above suggests metals can in theory find the root for you, but ALE errors if you don’t set project_root, so setting that option with this plugin is basically a waste.

After the above I set up the autocompletion and goto definition like this:

set omnifunc=ale#completion#OmniFunc
nnoremap <silent> gd :ALEGoToDefinition<CR>

The first line wires ALE up with OmniComplete, which by default is triggered by <C-x><C-o>. Check the ALE docs (or use some other tool) if you want autocomplete running all the time. The second line makes gd Goto the Definition of a type, method, or whatever.
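
For what it’s worth, the specific setting the ALE docs describe for always-on completion is a single flag; per the docs it has to be set before ALE is loaded, so put it near the top of your vimrc:

" must be set before ALE loads; see :help g:ale_completion_enabled
let g:ale_completion_enabled = 1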

🔗 Debugging

One of the things I really like about ALE is that it has some facilities for debugging itself. First and foremost, if you are having issues with ALE, run :ALEInfo. The first few lines will tell you which linters are currently set up, but make sure to scroll to the bottom, where you can see which linters are actually running.

If you are trying to debug the project root detection, just running :echo ale_linters#scala#metals#GetProjectRoot('') is pretty useful.

And finally, as a nuclear option, this will give you a log you can tail with basically all of the back and forth between ALE and the LSP, plus some other ALE debugging details:

call ch_logfile('/tmp/chlogfile.log', 'w') 

This uses built-in Vim functionality, which I was pleased to see. I have not checked how (or if) other plugins use it.
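
When you’re done spelunking, passing an empty filename to the same function turns logging back off:

call ch_logfile('')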

🔗 Not Perfect

There are some types that I cannot goto definition on. I don’t know if this is because Scala is complicated, because I haven’t completely configured Metals, because there are bugs in Metals, or some combination of these. For example, I can jump into definitions of Spark internals, and I can jump to definitions of classes within my project, but I cannot jump to the base class one of my classes extends.


For the most part I’m pretty happy with the outcome here. I got to the point where I think configuring an LSP won’t be so intimidating next time, I learned the terms to search for (initialization_options,) and I improved my configuration for working with Scala at work. There’s still plenty of room for improvement, of course.

(Affiliate links below.)

If you’d like to learn more about vim, I can recommend a few excellent books. I first learned how to use vi from Learning the vi and Vim Editors. The new edition has a lot more information and spends more time on Vim-specific features. It was helpful for me at the time; the fundamental model of vi is still well supported in Vim, and this book explores it well.

Second, if you really want to up your editing game, check out Practical Vim. It’s a very approachable book that unpacks some of the lesser-used features in ways that will be clearly and immediately useful. I periodically review this book because it’s such a treasure trove of clear hints and tips.


Thanks to my friend Jeff Rhyason for pointing out Bloop, totally separately from this project, and Michael McClimon for suggesting I check out ALE, and Chris Kipp for help setting up metals, and Kevin O’Neal for reading over this blog post.

🔗 Configuring Other Plugins

🔗 vim-lsp

For vim-lsp I basically gathered together information from the above, from the vim-lsp-settings plugin (which intends to autoconfigure vim-lsp), and from the wiki. The following worked for me:

au User lsp_setup call lsp#register_server({
   \ 'name': 'metals',
   \ 'cmd': ['metals'],
   \ 'initialization_options': { 'rootPatterns': 'build.gradle', 'isHttpEnabled': 'true' },
   \ 'allowlist': [ 'scala', 'sbt' ],
   \ })
nnoremap <silent> gd :LspDefinition<CR>
set omnifunc=lsp#complete

Conveniently, we don’t have to specify the project root.

🔗 vim-lsc

This time I only had to read the official doc for vim-lsc (mostly lsc-server-customization) to get it all working:

let g:lsc_server_commands = {
    \ 'scala': {
    \    'command': 'metals',
    \    'workspace_config': {
    \        'rootPatterns': 'build.gradle',
    \        'isHttpEnabled': 'true',
    \    },
    \  },
    \}
nnoremap <silent> gd :LSClientGoToDefinition<CR>
set omnifunc=lsc#complete#complete
