<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog: google-docs</title><link href="http://simonwillison.net/" rel="alternate"/><link href="http://simonwillison.net/tags/google-docs.atom" rel="self"/><id>http://simonwillison.net/</id><updated>2022-02-20T22:47:01+00:00</updated><author><name>Simon Willison</name></author><entry><title>Google Drive to SQLite</title><link href="https://simonwillison.net/2022/Feb/20/google-drive-to-sqlite/#atom-tag" rel="alternate"/><published>2022-02-20T22:47:01+00:00</published><updated>2022-02-20T22:47:01+00:00</updated><id>https://simonwillison.net/2022/Feb/20/google-drive-to-sqlite/#atom-tag</id><summary type="html">
    &lt;p&gt;I released a new tool this week: &lt;a href="https://datasette.io/tools/google-drive-to-sqlite"&gt;google-drive-to-sqlite&lt;/a&gt;. It's a CLI utility for fetching metadata about files in your &lt;a href="https://drive.google.com/"&gt;Google Drive&lt;/a&gt; and writing them to a local SQLite database.&lt;/p&gt;
&lt;p&gt;It's pretty fun!&lt;/p&gt;
&lt;p&gt;Here's how to create a SQLite database of every file you've starred in your Google Drive, including both files created in Google Docs/Sheets and files you've uploaded to your drive:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;% pip install google-drive-to-sqlite
% google-drive-to-sqlite auth
Visit the following URL to authenticate with Google Drive

https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&amp;amp;...

Then return here and paste in the resulting code:
Paste code here: 
# Authentication is now complete, so run:
% google-drive-to-sqlite files starred.db --starred
% ls -lah starred.db
-rw-r--r--@ 1 simon  staff    40K Feb 20 14:14 starred.db
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The OAuth client ID it uses hasn't been verified by Google yet, which I think means that only the first 100 people to use it will be able to authenticate. If you need to, you can work around that by creating your own client ID, as &lt;a href="https://datasette.io/tools/google-drive-to-sqlite#user-content-authentication"&gt;described in the README&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Having created that &lt;code&gt;starred.db&lt;/code&gt; file you can explore the resulting database using &lt;a href="https://datasette.io/"&gt;Datasette&lt;/a&gt; or &lt;a href="https://datasette.io/desktop"&gt;Datasette Desktop&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;datasette starred.db

# or if you have the Datasette Desktop macOS app installed:
open starred.db
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here's Datasette running against one of my larger metadata collections:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://static.simonwillison.net/static/2022/google-drive-to-sqlite.png" alt="Screenshot showing the drive_files, drive_folders and drive_users tables" style="max-width:100%;" /&gt;&lt;/p&gt;
&lt;h4&gt;Why build this?&lt;/h4&gt;
&lt;p&gt;I recently got involved with a participatory journalism project, where a team of reporters have used FOIA requests to gather a huge corpus of thousands of files. The files live in a complex folder hierarchy in a Google Drive. I wanted to start getting a feel for what's in there.&lt;/p&gt;
&lt;p&gt;Pulling the metadata - file names, sizes, file types, file owners, creation dates - into a SQLite database felt like a great way to start understanding the size and scope of what had been collected so far.&lt;/p&gt;
&lt;p&gt;Outside of that project, there's something very exciting to me about being able to use Google Drive to collate all kinds of different data and then tie it into the larger Datasette and &lt;a href="https://dogsheep.github.io/"&gt;Dogsheep&lt;/a&gt; ecosystems. I think there's a lot of potential here for all kinds of interesting projects.&lt;/p&gt;
&lt;h4&gt;How it works&lt;/h4&gt;
&lt;p&gt;The tool is written in Python using &lt;a href="https://click.palletsprojects.com/"&gt;Click&lt;/a&gt; (based on my &lt;a href="https://github.com/simonw/click-app"&gt;click-app template&lt;/a&gt;) and &lt;a href="https://sqlite-utils.datasette.io/"&gt;sqlite-utils&lt;/a&gt;. It works by calling the &lt;a href="https://developers.google.com/drive/api"&gt;Google Drive API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;auth&lt;/code&gt; command needs to get hold of an OAuth access token scoped to make read-only calls to the user's Google Drive contents.&lt;/p&gt;
&lt;p&gt;This took a bit of figuring out. I wrote up what I learned in this TIL: &lt;a href="https://til.simonwillison.net/googlecloud/google-oauth-cli-application"&gt;Google OAuth for a CLI application&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Notably, the end result of that flow is a JSON response containing both an &lt;code&gt;access_token&lt;/code&gt; and a &lt;code&gt;refresh_token&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The access token can be used to make authenticated API calls, but it expires after an hour and that expiration cannot be extended.&lt;/p&gt;
&lt;p&gt;The refresh token lasts forever, and can be used at any time to obtain a fresh access token.&lt;/p&gt;
&lt;p&gt;So the &lt;code&gt;auth&lt;/code&gt; command writes the refresh token to a file called &lt;code&gt;auth.json&lt;/code&gt;, then future calls to other commands use that token to retrieve a fresh access token on every run.&lt;/p&gt;
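That refresh step can be sketched in a few lines of Python. This is a hedged illustration, not the tool's actual code: it assumes auth.json has the shape shown later in this post (a refresh_token, google_client_id and google_client_secret nested under a "google-drive-to-sqlite" key), and the helper names here are hypothetical.

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://www.googleapis.com/oauth2/v4/token"


def build_refresh_payload(auth_path="auth.json"):
    # Read the refresh token saved by the `auth` command
    with open(auth_path) as fp:
        auth = json.load(fp)["google-drive-to-sqlite"]
    return {
        "grant_type": "refresh_token",
        "refresh_token": auth["refresh_token"],
        "client_id": auth["google_client_id"],
        "client_secret": auth["google_client_secret"],
    }


def fresh_access_token(auth_path="auth.json"):
    # Exchange the long-lived refresh token for a short-lived access token
    data = urllib.parse.urlencode(build_refresh_payload(auth_path)).encode()
    with urllib.request.urlopen(TOKEN_URL, data=data) as response:
        return json.loads(response.read())["access_token"]
```

Because the access token expires after an hour, doing this exchange at the start of every run means commands never have to worry about a stale token (except for very long-running operations, as described below).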
&lt;p&gt;The most useful command is &lt;a href="https://datasette.io/tools/google-drive-to-sqlite#user-content-google-drive-to-sqlite-files"&gt;google-drive-to-sqlite files&lt;/a&gt;, which retrieves file metadata based on various criteria, then either writes that to a SQLite database or dumps it out as JSON or newline-delimited JSON. It does this by paginating through results from the Google Drive &lt;a href="https://developers.google.com/drive/api/v3/reference/files/list"&gt;files list API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;files --folder ID&lt;/code&gt; option is a special case. It retrieves every nested file and subfolder starting at the specified folder. The Google Drive API doesn't support this operation directly, so the tool instead has to recursively call directory listings on every folder until it has pulled back all of the data. See my TIL &lt;a href="https://til.simonwillison.net/googlecloud/recursive-fetch-google-drive"&gt;Recursively fetching metadata for all files in a Google Drive folder&lt;/a&gt; for more details.&lt;/p&gt;
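The recursive strategy can be sketched like this. It's a simplified illustration of the approach, not the tool's implementation: list_children stands in for a paginated files.list API call using a query like "FOLDER_ID" in parents.

```python
# Google's MIME type that marks a Drive item as a folder
FOLDER_MIME = "application/vnd.google-apps.folder"


def fetch_folder_recursive(folder_id, list_children):
    # list_children(folder_id) should return the direct children of a folder,
    # as the Drive files.list API does - the API never recurses for us
    files = []
    for item in list_children(folder_id):
        files.append(item)
        if item.get("mimeType") == FOLDER_MIME:
            # Descend into subfolders ourselves
            files.extend(fetch_folder_recursive(item["id"], list_children))
    return files
```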
&lt;p&gt;This operation took over an hour for the largest folder I tested it against! So long that the access token it was using expired and I had to &lt;a href="https://github.com/simonw/google-drive-to-sqlite/issues/11"&gt;implement code&lt;/a&gt; to refresh the token in the middle of the operation.&lt;/p&gt;
&lt;h4&gt;Some other neat tricks&lt;/h4&gt;
&lt;p&gt;The &lt;a href="https://datasette.io/tools/google-drive-to-sqlite#user-content-google-drive-to-sqlite-download-file_id"&gt;download command&lt;/a&gt; downloads the specified file to disk:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite download \
  0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It detects the file type and uses that as the extension - in the above example, it saves the file as &lt;code&gt;0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.pdf&lt;/code&gt;.&lt;/p&gt;
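The idea of deriving an extension from the file's MIME type can be approximated with Python's standard mimetypes module. The filename_for helper here is hypothetical, shown only to illustrate the mapping; the tool's own logic may differ.

```python
import mimetypes


def filename_for(file_id, mime_type):
    # Map a MIME type such as "application/pdf" to an extension like ".pdf";
    # fall back to no extension if the type is unrecognized
    ext = mimetypes.guess_extension(mime_type) or ""
    return file_id + ext
```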
&lt;p&gt;The &lt;a href="https://datasette.io/tools/google-drive-to-sqlite#user-content-google-drive-to-sqlite-export-format-file_id"&gt;export command&lt;/a&gt; only works against the file IDs for docs, sheets and presentations created using Google Apps. It can export to a variety of different formats:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite export html \
  10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This writes to &lt;code&gt;10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU-export.html&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://datasette.io/tools/google-drive-to-sqlite#user-content-google-drive-to-sqlite-get-url"&gt;get command&lt;/a&gt; takes a URL to a Google Drive API endpoint and fetches it using a valid access token. This is a great tool for debugging and API exploration - my &lt;code&gt;github-to-sqlite&lt;/code&gt; tool &lt;a href="https://datasette.io/tools/github-to-sqlite#user-content-making-authenticated-api-calls"&gt;has this too&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite get 'https://www.googleapis.com/drive/v3/about?fields=*'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It also knows how to paginate! Adding &lt;code&gt;--paginate files&lt;/code&gt; will cause it to fetch all of the subsequent pages of the API and return just the items from the &lt;code&gt;"files"&lt;/code&gt; key combined into a single JSON array, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite get \
  https://www.googleapis.com/drive/v3/files \
  --paginate files
&lt;/code&gt;&lt;/pre&gt;
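The pagination loop behind --paginate can be sketched as follows. This is an illustrative outline, not the tool's code: fetch_page stands in for an authenticated GET that returns the decoded JSON response, and the Drive API signals further pages with a nextPageToken field.

```python
def paginate_all(fetch_page, key="files"):
    # fetch_page(page_token) fetches one page of results; pass the
    # nextPageToken from each response back in to get the next page
    items = []
    token = None
    while True:
        page = fetch_page(token)
        # Collect just the items under the requested key (e.g. "files")
        items.extend(page.get(key, []))
        token = page.get("nextPageToken")
        if not token:
            return items
```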
&lt;h4&gt;Exploring other APIs with the same tools&lt;/h4&gt;
&lt;p&gt;While I was building this, I realized that with just a little extra work the &lt;code&gt;auth&lt;/code&gt; and &lt;code&gt;get&lt;/code&gt; commands could be used to explore other Google APIs too.&lt;/p&gt;
&lt;p&gt;If you are a developer, you can create your own OAuth credentials and enable access to other APIs using &lt;a href="https://console.cloud.google.com/apis/credentials"&gt;the Google Cloud console&lt;/a&gt;. You can then take the resulting client ID and secret, pick a scope and run the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite auth -a calendar-auth.json \
  --scope 'https://www.googleapis.com/auth/calendar.readonly' \
  --google-client-id '184325416553-nu5ci563v36rmj9opdl7mah786anbkrq.apps.googleusercontent.com' \
  --google-client-secret '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'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;calendar-auth.json&lt;/code&gt; will now be a JSON file that looks something like this:&lt;/p&gt;
&lt;div class="highlight highlight-source-json"&gt;&lt;pre&gt;{
  &lt;span class="pl-ent"&gt;"google-drive-to-sqlite"&lt;/span&gt;: {
    &lt;span class="pl-ent"&gt;"refresh_token"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;1//...&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"google_client_id"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;184325416553-nu5ci563v36rmj9opdl7mah786anbkrq.apps.googleusercontent.com&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"google_client_secret"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;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&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
    &lt;span class="pl-ent"&gt;"scope"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;https://www.googleapis.com/auth/calendar.readonly&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
  }
}&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can now fetch your Google Calendar items by adding your email address to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite get \
  https://www.googleapis.com/calendar/v3/calendars/...@gmail.com/events \
  --auth calendar-auth.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will output JSON to the console. For newline-delimited JSON, add &lt;code&gt;--nl&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Since we can paginate with &lt;code&gt;--paginate items&lt;/code&gt;, this means we can pipe the results to &lt;a href="https://sqlite-utils.datasette.io/en/stable/cli-reference.html#insert"&gt;sqlite-utils insert&lt;/a&gt; and create a SQLite database of our calendar items!&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;google-drive-to-sqlite get \
  https://www.googleapis.com/calendar/v3/calendars/...@gmail.com/events \
  --auth calendar-auth.json \
  --paginate items --nl \
  | sqlite-utils insert calendar.db events \
    - --pk id --nl --alter --replace
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Maybe &lt;code&gt;google-drive-to-sqlite&lt;/code&gt; wasn't the right name for this after all!&lt;/p&gt;
&lt;h4&gt;What's next?&lt;/h4&gt;
&lt;p&gt;Google severely &lt;a href="https://cloud.google.com/blog/products/identity-security/enhancing-security-controls-for-google-drive-third-party-apps"&gt;tightened their policies&lt;/a&gt; on apps that can access Google Drive a few years ago. I'm currently waiting to see if my app will make it through their verification process, see &lt;a href="https://github.com/simonw/google-drive-to-sqlite/issues/15"&gt;issue #15&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If it doesn't, the tool will still be usable, but users will have to jump through some extra hoops to set up their own client ID. I don't see this as a huge concern.&lt;/p&gt;
&lt;p&gt;I've started thinking about ways to import additional data from the Google Drive APIs. I'm particularly interested in the idea of creating a full-text search index in SQLite based on plain text exports of documents created in Google Docs, see &lt;a href="https://github.com/simonw/google-drive-to-sqlite/issues/28"&gt;issue #28&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For other short-term future plans, take a look at the project's &lt;a href="https://github.com/simonw/google-drive-to-sqlite/issues"&gt;open issues&lt;/a&gt;.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/projects"&gt;projects&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/dogsheep"&gt;dogsheep&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/weeknotes"&gt;weeknotes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sqlite-utils"&gt;sqlite-utils&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="google-docs"/><category term="projects"/><category term="datasette"/><category term="dogsheep"/><category term="weeknotes"/><category term="sqlite-utils"/></entry><entry><title>Introducing Closure Tools</title><link href="https://simonwillison.net/2009/Nov/6/closure/#atom-tag" rel="alternate"/><published>2009-11-06T07:33:56+00:00</published><updated>2009-11-06T07:33:56+00:00</updated><id>https://simonwillison.net/2009/Nov/6/closure/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://googlecode.blogspot.com/2009/11/introducing-closure-tools.html"&gt;Introducing Closure Tools&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Google have released the pure-JavaScript library, apparently used for Gmail, Google Docs and Google Maps. It comes with a powerful JavaScript optimiser tool with linting built in and an accompanying Firebug extension to ensure the obfuscated code it produces can still be debugged. There’s also a template system which precompiles down to JavaScript and can also be called from Java.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/closure"&gt;closure&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/firebug"&gt;firebug&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gmail"&gt;gmail&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/libraries"&gt;libraries&lt;/a&gt;&lt;/p&gt;



</summary><category term="closure"/><category term="firebug"/><category term="gmail"/><category term="google"/><category term="google-docs"/><category term="javascript"/><category term="libraries"/></entry><entry><title>Write to a Google Spreadsheet from a Python script</title><link href="https://simonwillison.net/2009/Feb/16/write/#atom-tag" rel="alternate"/><published>2009-02-16T21:02:06+00:00</published><updated>2009-02-16T21:02:06+00:00</updated><id>https://simonwillison.net/2009/Feb/16/write/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.mattcutts.com/blog/write-google-spreadsheet-from-python/"&gt;Write to a Google Spreadsheet from a Python script&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
I didn’t know Google Spreadsheets could directly serve dynamic images that automatically update when the underlying data changes.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/googlespreadsheets"&gt;googlespreadsheets&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;&lt;/p&gt;



</summary><category term="google"/><category term="google-docs"/><category term="googlespreadsheets"/><category term="python"/></entry><entry><title>US economic data spreadsheets from the Guardian</title><link href="https://simonwillison.net/2009/Jan/16/datawonks/#atom-tag" rel="alternate"/><published>2009-01-16T18:17:34+00:00</published><updated>2009-01-16T18:17:34+00:00</updated><id>https://simonwillison.net/2009/Jan/16/datawonks/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.guardian.co.uk/help/insideguardian/2009/jan/15/unitedstates-data-journalism-google-spreadsheets"&gt;US economic data spreadsheets from the Guardian&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
At the Guardian we’ve just released a bunch of economic data about the US painstakingly collected by Simon Rogers, our top data journalist, as Google Docs spreadsheets. Get your data here.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/data"&gt;data&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/economics"&gt;economics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/simon-rogers"&gt;simon-rogers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/spreadsheets"&gt;spreadsheets&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/the-guardian"&gt;the-guardian&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/usa"&gt;usa&lt;/a&gt;&lt;/p&gt;



</summary><category term="data"/><category term="economics"/><category term="google-docs"/><category term="simon-rogers"/><category term="spreadsheets"/><category term="the-guardian"/><category term="usa"/></entry><entry><title>Data Scraping Wikipedia with Google Spreadsheets</title><link href="https://simonwillison.net/2008/Oct/16/data/#atom-tag" rel="alternate"/><published>2008-10-16T14:37:33+00:00</published><updated>2008-10-16T14:37:33+00:00</updated><id>https://simonwillison.net/2008/Oct/16/data/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://ouseful.wordpress.com/2008/10/14/data-scraping-wikipedia-with-google-spreadsheets/"&gt;Data Scraping Wikipedia with Google Spreadsheets&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
I hadn’t played with =importHTML in Google spreadsheets, which lets you suck in data from an HTML table or list somewhere on the web. This tutorial takes it further, bringing Wikipedia, Yahoo! Pipes and KML in to the mix.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/googlespreadsheet"&gt;googlespreadsheet&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/importhtml"&gt;importhtml&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/kml"&gt;kml&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mashups"&gt;mashups&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/scraping"&gt;scraping&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/wikipedia"&gt;wikipedia&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/yahoo-pipes"&gt;yahoo-pipes&lt;/a&gt;&lt;/p&gt;



</summary><category term="google-docs"/><category term="googlespreadsheet"/><category term="importhtml"/><category term="kml"/><category term="mashups"/><category term="scraping"/><category term="wikipedia"/><category term="yahoo-pipes"/></entry><entry><title>Quoting Danny O'Brien</title><link href="https://simonwillison.net/2008/Jul/20/danny/#atom-tag" rel="alternate"/><published>2008-07-20T09:00:03+00:00</published><updated>2008-07-20T09:00:03+00:00</updated><id>https://simonwillison.net/2008/Jul/20/danny/#atom-tag</id><summary type="html">
    &lt;blockquote cite="http://www.oblomovka.com/entries/2008/07/16#1216246380"&gt;&lt;p&gt;If we want people to have the same degree of user autonomy as we've come to expect from the world, we may have to sit down and code alternatives to Google Docs, Twitter, and EC2 that can live with us on the edge, not be run by third parties.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="http://www.oblomovka.com/entries/2008/07/16#1216246380"&gt;Danny O&amp;#x27;Brien&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/danny-obrien"&gt;danny-obrien&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/decentralisation"&gt;decentralisation&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ec2"&gt;ec2&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/twitter"&gt;twitter&lt;/a&gt;&lt;/p&gt;



</summary><category term="danny-obrien"/><category term="decentralisation"/><category term="ec2"/><category term="google-docs"/><category term="twitter"/></entry><entry><title>Google apps for your newsroom</title><link href="https://simonwillison.net/2008/Jan/7/newsroom/#atom-tag" rel="alternate"/><published>2008-01-07T21:24:05+00:00</published><updated>2008-01-07T21:24:05+00:00</updated><id>https://simonwillison.net/2008/Jan/7/newsroom/#atom-tag</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="http://www.postneo.com/2008/01/07/google-apps-for-your-newsroom"&gt;Google apps for your newsroom&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
How the LJ World team use online tools like Google Spreadsheet, Swivel, ManyEyes and Google MyMaps to collaborate with the newsroom and build data-heavy applications even faster.


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/collaboration"&gt;collaboration&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/data-journalism"&gt;data-journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/django"&gt;django&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-calendar"&gt;google-calendar&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-docs"&gt;google-docs&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/google-maps"&gt;google-maps&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/journalism"&gt;journalism&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ljworld"&gt;ljworld&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/manyeyes"&gt;manyeyes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/matt-croydon"&gt;matt-croydon&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/mymaps"&gt;mymaps&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/news"&gt;news&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/newsroom"&gt;newsroom&lt;/a&gt;&lt;/p&gt;



</summary><category term="collaboration"/><category term="data-journalism"/><category term="django"/><category term="google"/><category term="google-calendar"/><category term="google-docs"/><category term="google-maps"/><category term="journalism"/><category term="ljworld"/><category term="manyeyes"/><category term="matt-croydon"/><category term="mymaps"/><category term="news"/><category term="newsroom"/></entry></feed>