InformationWeek reports that Google has started testing ways to index data from the invisible Web, including "Web pages generated dynamically from a database, based on input such as might be provided through a Web submission form." (For more on the invisible Web, see my Wisconsin Lawyer article, Searching Smarter.)
Given that the invisible Web, also known as the deep Web or hidden Web, is estimated to be 400 to 550 times larger than the visible Web, that could amount to a lot more data accessible via Google.
Over at Search Engine Land, Danny Sullivan points out that "Google's not the first to do something like this. Companies like Quigo, BrightPlanet, and WhizBang Labs were doing this type of work years ago. But it never translated over to the major search engines. Now chapter two of surfacing deep web material is opening, this time with a major search player -- in that, Google is being a pioneer."
Hat tip to WisLaw for this post.