<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: nlp</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/nlp.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2024-12-24T06:21:29+00:00</updated><author><name>Simon Willison</name></author><entry><title>Finally, a replacement for BERT: Introducing ModernBERT</title><link href="https://simonwillison.net/2024/Dec/24/modernbert/#atom-tag" rel="alternate"/><published>2024-12-24T06:21:29+00:00</published><updated>2024-12-24T06:21:29+00:00</updated><id>https://simonwillison.net/2024/Dec/24/modernbert/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.answer.ai/posts/2024-12-19-modernbert.html"&gt;Finally, a replacement for BERT: Introducing ModernBERT&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;a href="https://en.wikipedia.org/wiki/BERT_(language_model)"&gt;BERT&lt;/a&gt; was an early language model released by Google in October 2018. Unlike modern LLMs it wasn't designed for generating text. BERT was trained for masked token prediction and was generally applied to problems like Named Entity Recognition or Sentiment Analysis. BERT also wasn't very useful on its own - most applications required you to fine-tune a model on top of it.&lt;/p&gt;
&lt;p&gt;In exploring BERT I decided to try out &lt;a href="https://huggingface.co/dslim/distilbert-NER"&gt;dslim/distilbert-NER&lt;/a&gt;, a popular Named Entity Recognition model fine-tuned on top of DistilBERT (a smaller distilled version of the original BERT model). &lt;a href="https://til.simonwillison.net/llms/bert-ner"&gt;Here are my notes&lt;/a&gt; on running that using &lt;code&gt;uv run&lt;/code&gt;.&lt;/p&gt;
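&lt;p&gt;For context, here's roughly what that NER call looks like with the Transformers &lt;code&gt;pipeline&lt;/code&gt; API - a minimal sketch, where the example sentence and the &lt;code&gt;aggregation_strategy&lt;/code&gt; choice are mine rather than from the model card:&lt;/p&gt;
&lt;pre&gt;from transformers import pipeline

# "ner" is an alias for the token-classification pipeline;
# aggregation_strategy="simple" merges word-piece tokens back into whole entities
ner = pipeline("ner", model="dslim/distilbert-NER", aggregation_strategy="simple")

for entity in ner("Simon Willison lives in California."):
    # Each result includes an entity_group (PER, ORG, LOC, MISC), a score and the matched text
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))&lt;/pre&gt;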
&lt;p&gt;Jeremy Howard's &lt;a href="https://www.answer.ai/"&gt;Answer.AI&lt;/a&gt; research group, &lt;a href="https://www.lighton.ai/"&gt;LightOn&lt;/a&gt; and friends supported the development of ModernBERT, a brand new BERT-style model that applies many enhancements from the past six years of advances in this space.&lt;/p&gt;
&lt;p&gt;While BERT was trained on 3.3 billion tokens, producing 110 million and 340 million parameter models, ModernBERT was trained on 2 trillion tokens, resulting in 140 million and 395 million parameter models. The parameter count hasn't increased much because it's designed to run on lower-end hardware. It has an 8192 token context length, a significant improvement on BERT's 512.&lt;/p&gt;
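&lt;p&gt;Those numbers are easy to verify yourself - a quick sketch, assuming ModernBERT follows the usual Transformers config conventions (&lt;code&gt;max_position_embeddings&lt;/code&gt; holding the context window):&lt;/p&gt;
&lt;pre&gt;from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("answerdotai/ModernBERT-base")
print(config.max_position_embeddings)  # context window - should be 8192

model = AutoModel.from_pretrained("answerdotai/ModernBERT-base")
# Summing the parameter tensors should land near the 140 million figure
print(sum(p.numel() for p in model.parameters()))&lt;/pre&gt;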
&lt;p&gt;I was able to run one of the demos from the announcement post using &lt;code&gt;uv run&lt;/code&gt; like this (I'm not sure why I had to pin &lt;code&gt;numpy&amp;lt;2.0&lt;/code&gt;, but without it I got an error: &lt;code&gt;cannot import name 'ComplexWarning' from 'numpy.core.numeric'&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;uv run --with &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;numpy&amp;lt;2.0&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; --with torch --with &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;git+https://github.com/huggingface/transformers.git&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; python&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Then this Python:&lt;/p&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;torch&lt;/span&gt;
&lt;span class="pl-k"&gt;from&lt;/span&gt; &lt;span class="pl-s1"&gt;transformers&lt;/span&gt; &lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;pipeline&lt;/span&gt;
&lt;span class="pl-k"&gt;from&lt;/span&gt; &lt;span class="pl-s1"&gt;pprint&lt;/span&gt; &lt;span class="pl-k"&gt;import&lt;/span&gt; &lt;span class="pl-s1"&gt;pprint&lt;/span&gt;
&lt;span class="pl-s1"&gt;pipe&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-en"&gt;pipeline&lt;/span&gt;(
    &lt;span class="pl-s"&gt;"fill-mask"&lt;/span&gt;,
    &lt;span class="pl-s1"&gt;model&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;"answerdotai/ModernBERT-base"&lt;/span&gt;,
    &lt;span class="pl-s1"&gt;torch_dtype&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s1"&gt;torch&lt;/span&gt;.&lt;span class="pl-c1"&gt;bfloat16&lt;/span&gt;,
)
&lt;span class="pl-s1"&gt;input_text&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s"&gt;"He walked to the [MASK]."&lt;/span&gt;
&lt;span class="pl-s1"&gt;results&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-en"&gt;pipe&lt;/span&gt;(&lt;span class="pl-s1"&gt;input_text&lt;/span&gt;)
&lt;span class="pl-en"&gt;pprint&lt;/span&gt;(&lt;span class="pl-s1"&gt;results&lt;/span&gt;)&lt;/pre&gt;
&lt;p&gt;Which downloaded 573MB to &lt;code&gt;~/.cache/huggingface/hub/models--answerdotai--ModernBERT-base&lt;/code&gt; and output:&lt;/p&gt;
&lt;pre&gt;[{&lt;span class="pl-s"&gt;'score'&lt;/span&gt;: &lt;span class="pl-c1"&gt;0.11669921875&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'sequence'&lt;/span&gt;: &lt;span class="pl-s"&gt;'He walked to the door.'&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token'&lt;/span&gt;: &lt;span class="pl-c1"&gt;3369&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token_str'&lt;/span&gt;: &lt;span class="pl-s"&gt;' door'&lt;/span&gt;},
 {&lt;span class="pl-s"&gt;'score'&lt;/span&gt;: &lt;span class="pl-c1"&gt;0.037841796875&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'sequence'&lt;/span&gt;: &lt;span class="pl-s"&gt;'He walked to the office.'&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token'&lt;/span&gt;: &lt;span class="pl-c1"&gt;3906&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token_str'&lt;/span&gt;: &lt;span class="pl-s"&gt;' office'&lt;/span&gt;},
 {&lt;span class="pl-s"&gt;'score'&lt;/span&gt;: &lt;span class="pl-c1"&gt;0.0277099609375&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'sequence'&lt;/span&gt;: &lt;span class="pl-s"&gt;'He walked to the library.'&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token'&lt;/span&gt;: &lt;span class="pl-c1"&gt;6335&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token_str'&lt;/span&gt;: &lt;span class="pl-s"&gt;' library'&lt;/span&gt;},
 {&lt;span class="pl-s"&gt;'score'&lt;/span&gt;: &lt;span class="pl-c1"&gt;0.0216064453125&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'sequence'&lt;/span&gt;: &lt;span class="pl-s"&gt;'He walked to the gate.'&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token'&lt;/span&gt;: &lt;span class="pl-c1"&gt;7394&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token_str'&lt;/span&gt;: &lt;span class="pl-s"&gt;' gate'&lt;/span&gt;},
 {&lt;span class="pl-s"&gt;'score'&lt;/span&gt;: &lt;span class="pl-c1"&gt;0.020263671875&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'sequence'&lt;/span&gt;: &lt;span class="pl-s"&gt;'He walked to the window.'&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token'&lt;/span&gt;: &lt;span class="pl-c1"&gt;3497&lt;/span&gt;,
  &lt;span class="pl-s"&gt;'token_str'&lt;/span&gt;: &lt;span class="pl-s"&gt;' window'&lt;/span&gt;}]&lt;/pre&gt;

&lt;p&gt;I'm looking forward to trying out models that use ModernBERT as their base. The model release is accompanied by a paper (&lt;a href="https://arxiv.org/abs/2412.13663"&gt;Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference&lt;/a&gt;) and &lt;a href="https://huggingface.co/docs/transformers/main/en/model_doc/modernbert"&gt;new documentation&lt;/a&gt; for using it with the Transformers library.&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://bsky.app/profile/benjaminwarner.dev/post/3ldur45oz322b"&gt;@benjaminwarner.dev&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/bert"&gt;bert&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/transformers"&gt;transformers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jeremy-howard"&gt;jeremy-howard&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/hugging-face"&gt;hugging-face&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;&lt;/p&gt;



</summary><category term="bert"/><category term="nlp"/><category term="python"/><category term="transformers"/><category term="ai"/><category term="jeremy-howard"/><category term="hugging-face"/><category term="uv"/></entry><entry><title>Matthew Honnibal from spaCy on why LLMs have not solved NLP</title><link href="https://simonwillison.net/2023/Sep/9/matthew-honnibal-llms/#atom-tag" rel="alternate"/><published>2023-09-09T21:30:18+00:00</published><updated>2023-09-09T21:30:18+00:00</updated><id>https://simonwillison.net/2023/Sep/9/matthew-honnibal-llms/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://news.ycombinator.com/item?id=37442574#37443921"&gt;Matthew Honnibal from spaCy on why LLMs have not solved NLP&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
A common trope these days is that the entire field of NLP has been effectively solved by Large Language Models. Here’s a lengthy comment from Matthew Honnibal, creator of the highly regarded spaCy Python NLP library, explaining in detail why that argument doesn’t hold up.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;



</summary><category term="nlp"/><category term="ai"/><category term="generative-ai"/><category term="llms"/></entry><entry><title>Closed AI Models Make Bad Baselines</title><link href="https://simonwillison.net/2023/Apr/3/closed-ai-models-make-bad-baselines/#atom-tag" rel="alternate"/><published>2023-04-03T19:57:09+00:00</published><updated>2023-04-03T19:57:09+00:00</updated><id>https://simonwillison.net/2023/Apr/3/closed-ai-models-make-bad-baselines/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://hackingsemantics.xyz/2023/closed-baselines/"&gt;Closed AI Models Make Bad Baselines&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The NLP academic research community is facing a tough challenge: the state of the art in large language models, GPT-4, is entirely closed, which means papers that compare it to other models lack replicability and credibility. “We make the case that as far as research and scientific publications are concerned, the “closed” models (as defined below) cannot be meaningfully studied, and they should not become a “universal baseline”, the way BERT was for some time widely considered to be.”&lt;/p&gt;

&lt;p&gt;Anna Rogers proposes a new rule for this kind of research: “That which is not open and reasonably reproducible cannot be considered a requisite baseline.”&lt;/p&gt;

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://twitter.com/emilymbender/status/1642975520840384514"&gt;@emilymbender&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openai"&gt;openai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gpt-4"&gt;gpt-4&lt;/a&gt;&lt;/p&gt;



</summary><category term="nlp"/><category term="ai"/><category term="openai"/><category term="generative-ai"/><category term="gpt-4"/></entry><entry><title>Quoting Jeonghwan Kim</title><link href="https://simonwillison.net/2023/Mar/16/jeonghwan-kim/#atom-tag" rel="alternate"/><published>2023-03-16T05:39:58+00:00</published><updated>2023-03-16T05:39:58+00:00</updated><id>https://simonwillison.net/2023/Mar/16/jeonghwan-kim/#atom-tag</id><summary type="html">
    &lt;blockquote cite="https://twitter.com/masterjeongk/status/1635967360866877442"&gt;&lt;p&gt;As an NLP researcher I'm kind of worried about this field after 10-20 years. Feels like these oversized LLMs are going to eat up this field and I'm sitting in my chair thinking, "What's the point of my research when GPT-4 can do it better?"&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/masterjeongk/status/1635967360866877442"&gt;Jeonghwan Kim&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/machine-learning"&gt;machine-learning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gpt-4"&gt;gpt-4&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;



</summary><category term="machine-learning"/><category term="nlp"/><category term="ai"/><category term="generative-ai"/><category term="gpt-4"/><category term="llms"/></entry><entry><title>Statistical NLP on OpenStreetMap</title><link href="https://simonwillison.net/2018/Jan/8/statistical-nlp-openstreetmap/#atom-tag" rel="alternate"/><published>2018-01-08T19:33:23+00:00</published><updated>2018-01-08T19:33:23+00:00</updated><id>https://simonwillison.net/2018/Jan/8/statistical-nlp-openstreetmap/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://machinelearnings.co/statistical-nlp-on-openstreetmap-b9d573e6cc86"&gt;Statistical NLP on OpenStreetMap&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
libpostal is ferociously clever: it’s a library for parsing and understanding worldwide addresses, built on top of a machine learning model trained on millions of addresses from OpenStreetMap. Al Barrentine describes how it works in this fascinating and detailed essay.
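&lt;p&gt;The Python bindings give a feel for what the model actually does - a minimal sketch, assuming you have the &lt;code&gt;postal&lt;/code&gt; package (pypostal) installed along with the underlying libpostal C library, and using an illustrative address:&lt;/p&gt;
&lt;pre&gt;from postal.parser import parse_address

# parse_address returns (value, label) pairs - the labels come from the OSM-trained model
for value, label in parse_address("781 Franklin Ave Crown Heights Brooklyn NY 11216"):
    print(label, "=", value)
# e.g. house_number = 781, road = franklin ave, suburb = crown heights, ...&lt;/pre&gt;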


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/machine-learning"&gt;machine-learning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/openstreetmap"&gt;openstreetmap&lt;/a&gt;&lt;/p&gt;



</summary><category term="machine-learning"/><category term="nlp"/><category term="openstreetmap"/></entry><entry><title>spaCy</title><link href="https://simonwillison.net/2017/Nov/8/spacy/#atom-tag" rel="alternate"/><published>2017-11-08T16:43:05+00:00</published><updated>2017-11-08T16:43:05+00:00</updated><id>https://simonwillison.net/2017/Nov/8/spacy/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://spacy.io/"&gt;spaCy&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
“Industrial-strength Natural Language Processing in Python”. Exciting alternative to nltk—spaCy is mostly written in Cython, makes bold performance claims and ships with a range of pre-built statistical models covering multiple languages. The API design is clean and intuitive, and spaCy even includes an SVG visualizer that works with Jupyter.
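&lt;p&gt;A minimal sketch of that API, using today's model naming (&lt;code&gt;en_core_web_sm&lt;/code&gt;) rather than the 2017-era &lt;code&gt;spacy.load('en')&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;import spacy
from spacy import displacy

# First: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # Apple ORG / U.K. GPE / $1 billion MONEY

# The SVG visualizer mentioned above; in a Jupyter notebook this renders inline
displacy.render(doc, style="dep")&lt;/pre&gt;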


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/spacy"&gt;spacy&lt;/a&gt;&lt;/p&gt;



</summary><category term="nlp"/><category term="python"/><category term="spacy"/></entry><entry><title>Oxford Deep NLP 2017 course</title><link href="https://simonwillison.net/2017/Oct/31/oxford-cs-deepnlp/#atom-tag" rel="alternate"/><published>2017-10-31T20:39:17+00:00</published><updated>2017-10-31T20:39:17+00:00</updated><id>https://simonwillison.net/2017/Oct/31/oxford-cs-deepnlp/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/oxford-cs-deepnlp-2017/lectures"&gt;Oxford Deep NLP 2017 course&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Slides, course description and links to lecture videos for the 2017 Deep Natural Language Processing course at the University of Oxford presented by a team from Google DeepMind.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://news.ycombinator.com/item?id=15593408"&gt;Hacker News&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/machine-learning"&gt;machine-learning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;&lt;/p&gt;



</summary><category term="google"/><category term="machine-learning"/><category term="nlp"/></entry><entry><title>Which investors would consider a natural language processing startup in London?</title><link href="https://simonwillison.net/2013/Sep/30/which-investors-would-consider/#atom-tag" rel="alternate"/><published>2013-09-30T15:52:00+00:00</published><updated>2013-09-30T15:52:00+00:00</updated><id>https://simonwillison.net/2013/Sep/30/which-investors-would-consider/#atom-tag</id><summary type="html">
    &lt;p&gt;&lt;em&gt;My answer to &lt;a href="https://www.quora.com/Which-investors-would-consider-a-natural-language-processing-startup-in-London/answer/Simon-Willison"&gt;Which investors would consider a natural language processing startup in London?&lt;/a&gt; on Quora&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I don't know the answer, but I know how you can find it: track down as many London-based AI/machine learning/NLP startups as you can and look at who their investors are.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/london"&gt;london&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/startups"&gt;startups&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/quora"&gt;quora&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="london"/><category term="nlp"/><category term="startups"/><category term="quora"/></entry><entry><title>topia.termextract</title><link href="https://simonwillison.net/2009/Aug/10/python/#atom-tag" rel="alternate"/><published>2009-08-10T21:26:02+00:00</published><updated>2009-08-10T21:26:02+00:00</updated><id>https://simonwillison.net/2009/Aug/10/python/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://pypi.python.org/pypi/topia.termextract/"&gt;topia.termextract&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Impressive Python term extraction library (similar to the various term extraction web APIs, but you can run it on your own hardware), incorporating a part-of-speech tagging algorithm.
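&lt;p&gt;The API is pleasantly small - a sketch based on the package's own documentation (bear in mind this is a Python 2-era library):&lt;/p&gt;
&lt;pre&gt;from topia.termextract import extract

extractor = extract.TermExtractor()
# The extractor is callable: it POS-tags the text, then pulls out candidate
# noun phrases, returning (term, occurrences, word_count) tuples
for term, occurrences, word_count in extractor(
        "Police have arrested a man on suspicion of carrying a handgun."):
    print(term, occurrences, word_count)&lt;/pre&gt;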


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/termextraction"&gt;termextraction&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/topia"&gt;topia&lt;/a&gt;&lt;/p&gt;



</summary><category term="nlp"/><category term="python"/><category term="termextraction"/><category term="topia"/></entry><entry><title>JS-Placemaker - geolocate texts in JavaScript</title><link href="https://simonwillison.net/2009/May/23/jsplacemaker/#atom-tag" rel="alternate"/><published>2009-05-23T00:36:38+00:00</published><updated>2009-05-23T00:36:38+00:00</updated><id>https://simonwillison.net/2009/May/23/jsplacemaker/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://icant.co.uk/jsplacemaker/"&gt;JS-Placemaker - geolocate texts in JavaScript&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Chris Heilmann exposed Placemaker to JavaScript (JSONP) using a YQL execute table. Try his examples—I’m impressed that “My name is Jack London, I live in Ontario” returns just Ontario, demonstrating that Placemaker’s NLP is pretty well tuned.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/christian-heilmann"&gt;christian-heilmann&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/geocoding"&gt;geocoding&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/geospatial"&gt;geospatial&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/jsonp"&gt;jsonp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nlp"&gt;nlp&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/placemaker"&gt;placemaker&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/yahoo"&gt;yahoo&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/yql"&gt;yql&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/yqlexecute"&gt;yqlexecute&lt;/a&gt;&lt;/p&gt;



</summary><category term="christian-heilmann"/><category term="geocoding"/><category term="geospatial"/><category term="javascript"/><category term="jsonp"/><category term="nlp"/><category term="placemaker"/><category term="yahoo"/><category term="yql"/><category term="yqlexecute"/></entry></feed>