Popular recipes tagged "web" but not "http"
ActiveState Code Recipes, http://code.activestate.com/recipes/tags/web-http/ (last updated 2017-01-05)

Give Python code a web plus command-line interface with hug (Python)
Recipe 580742 by Vasudev Ram, 2017-01-05. Tags: cli, commandline, hug, library, python, python3, user_interface, web, web_server.
http://code.activestate.com/recipes/580742-give-python-code-a-web-plus-command-line-interface/

This recipe shows how to take a Python function and wrap it with both a web and a command-line interface, somewhat easily, using the hug Python library. The example wraps a function that uses the psutil library to get information on disk partitions, so you can view the partition info either in a web browser or on the command line. It is also possible to wrap multiple functions in the same Python file and expose all of them via both the web and the command line.

More information and multiple sample outputs are available here:
https://jugad2.blogspot.in/2017/01/give-your-python-function-webcli-hug.html
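As a rough illustration of the approach (not the recipe's exact code), a minimal sketch using hug's stacked decorators might look like this; the file name partitions.py and the /partitions route are placeholders:

```python
# Minimal sketch: one function exposed both as a CLI command and a web endpoint.
import hug
import psutil

@hug.cli()               # expose on the command line
@hug.get('/partitions')  # ...and over HTTP at /partitions
def partitions():
    """Return basic information about the machine's disk partitions."""
    return [part._asdict() for part in psutil.disk_partitions()]

if __name__ == '__main__':
    partitions.interface.cli()   # CLI use: python partitions.py
```

The web side is typically served with hug's development server (hug -f partitions.py), after which the same data is available from a browser as JSON.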
Search for oranges with the wikipedia Python library (Python)
Recipe 579121 by Vasudev Ram, 2015-11-03. Tags: api, library, python, retrieving, web, wikipedia.
http://code.activestate.com/recipes/579121-search-for-oranges-with-the-wikipedia-python-libra/

The wikipedia Python library (available on PyPI) is a wrapper for the official Wikipedia API. The library is higher level and easier to use than the API, though it exposes only a subset of the API's functionality. It makes basic access to Wikipedia pages easy, which can be useful for many educational, reference and other purposes. This recipe shows basic use of the wikipedia library, by using it to search for information about oranges.

Simple Web socket client implementation using Tornado framework (Python)
Recipe 579076 by Vovan, 2015-06-30. Tags: client, tornado, web, websocket, websockets.
http://code.activestate.com/recipes/579076-simple-web-socket-client-implementation-using-torn/

A simple WebSocket client implementation using the Tornado framework.

A script to automate installing MTS Mblaze UI in linux (Bash)
Recipe 579039 by Emil george james, 2015-07-29. Tags: internet, linux, script, shell, web.
http://code.activestate.com/recipes/579039-a-script-to-automate-installing-mts-mblaze-ui-in-l/

An automated Linux shell script to install the MTS MBlaze UI application on any Linux distribution. The script installs the MTS MBlaze UI on your system and sets up everything needed; you only have to choose a few options interactively during setup. It is intended to work in all Linux environments.

Python script to find linux distros details from distrowatch (Python)
Recipe 579038 by Emil george james, 2015-07-29. Tags: beautifulsoup, internet, module, python, url, web.
http://code.activestate.com/recipes/579038-python-script-to-find-linux-distros-details-from-d/

A simple Python script that finds details about a Linux distribution from DistroWatch using the BeautifulSoup and urllib2 modules. The script fetches the distribution's details from http://distrowatch.com when the distribution name is passed as an argument.

Composing a POSTable HTTP request with multipart/form-data Content-Type to simulate a form/file upload (Python)
Recipe 578846 (rev. 5) by István Pásztor, 2014-03-08. Tags: field, file, form, html, httpclient, mime, multipart, post, upload, web.
http://code.activestate.com/recipes/578846-composing-a-postable-http-request-with-multipartfo/

This code is useful if you are using an HTTP client and want to simulate a request similar to that of a browser submitting a form with several input fields, including file upload fields. I've used it with Python 2.x.
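For illustration, here is a minimal, self-contained sketch of how such a multipart/form-data body can be composed by hand; this is not the recipe's code, and the upload URL and field names in the usage comment are placeholders:

```python
import uuid

def encode_multipart(fields, files):
    """Compose a multipart/form-data request body.

    fields: dict mapping field name -> text value
    files:  dict mapping field name -> (filename, content_bytes, content_type)
    Returns (body_bytes, content_type_header_value).
    """
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(('--%s\r\n'
                      'Content-Disposition: form-data; name="%s"\r\n\r\n'
                      '%s\r\n' % (boundary, name, value)).encode('utf-8'))
    for name, (filename, content, ctype) in files.items():
        parts.append(('--%s\r\n'
                      'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
                      'Content-Type: %s\r\n\r\n'
                      % (boundary, name, filename, ctype)).encode('utf-8'))
        parts.append(content + b'\r\n')
    parts.append(('--%s--\r\n' % boundary).encode('utf-8'))
    return b''.join(parts), 'multipart/form-data; boundary=' + boundary

# Example: POST a text field plus a file with urllib (Python 3).
# import urllib.request
# body, ctype = encode_multipart({'comment': 'hello'},
#                                {'upload': ('notes.txt', b'file data', 'text/plain')})
# req = urllib.request.Request('http://example.com/upload', data=body,
#                              headers={'Content-Type': ctype})
# urllib.request.urlopen(req)
```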
Geocoding Lists via Google Maps (Python)
Recipe 578126 (rev. 2) by Mano Bastardo, 2012-05-11. Tags: batch, coordinates, geocode, geocoding, google, google_maps, lat, latitude, list, list_comprehension, lng, longitude, map, web.
http://code.activestate.com/recipes/578126-geocoding-lists-via-google-maps/

A simple script written as an experiment in geocoding addresses in a database. A list of addresses in the form "100 Any Street, Anytown, CA, 10010" is passed to a Google Maps URL, and the latitude/longitude coordinates are extracted from the returned XML.

XML parsing methods are not used in this script; simple string searches are used instead.
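A rough sketch of that string-search approach is below. It assumes the Google Geocoding API's XML output endpoint, which now requires an API key; the exact URL the 2012 recipe used may have differed, and real code should handle error responses and empty result sets.

```python
import urllib.parse
import urllib.request

def geocode(address, api_key):
    """Return (lat, lng) for an address using plain string searches on the XML."""
    url = ('https://maps.googleapis.com/maps/api/geocode/xml?'
           + urllib.parse.urlencode({'address': address, 'key': api_key}))
    xml = urllib.request.urlopen(url).read().decode('utf-8')
    # Grab the first <lat>/<lng> pair without an XML parser, as the recipe does.
    lat = xml.split('<lat>')[1].split('</lat>')[0]
    lng = xml.split('<lng>')[1].split('</lng>')[0]
    return float(lat), float(lng)

# coords = [geocode(addr, 'YOUR_KEY') for addr in
#           ['100 Any Street, Anytown, CA, 10010']]
```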
A Simple Webcrawler (Python)
Recipe 578060 by John, 2012-03-03. Tags: crawler, html, page, parser, scraping, urllib, urlopen, web.
http://code.activestate.com/recipes/578060-a-simple-webcrawler/

This is my simple web crawler. It takes as input a list of seed pages (web URLs) and "scrapes" each page for all of its absolute-path links (i.e. links in the format http://) and adds those to a dictionary. The web crawler can take all the links found in the seed pages and then scrape those as well. You can continue scraping as deep as you like, and you control how deep you go with the depth variable passed into the WebCrawler class function start_crawling(seed_pages, depth). Think of the depth as the recursion depth (the number of web pages deep you go before returning back up the tree).

To make this web crawler a little more interesting I added some bells and whistles. I added the ability to pass a regular expression object into the WebCrawler class constructor. The regular expression object is used to "filter" the links found during scraping. For example, in the recipe's code you will see:

    cnn_url_regex = re.compile('(?<=[.]cnn)[.]com')  # a compiled regular expression object
    w = WebCrawler(cnn_url_regex)

This particular regular expression says:

1) Find the first occurrence of the string '.com'.

2) Then, looking backwards from where '.com' was found, attempt to find '.cnn'.

Why do this? You can control where the crawler crawls. In this case I am constraining the crawler to operate on web pages within cnn.com.

Another feature I added is the ability to parse a given page looking for specific HTML tags. I chose the <h1> tag as an example. Once an <h1> tag is found, I store all the words found in the tag in a dictionary that gets associated with the page URL.

Why do this? My thought was that if I scraped the page for text, I could eventually use this data to answer a search request. Say I searched for 'Lebron James', and suppose one of the pages my crawler scraped contained an article that mentions Lebron James many times. In response to the search request I could return the link to that article.

The web crawler is implemented in the WebCrawler class. It has two functions the user should call:

1) start_crawling(seed_pages, depth)

2) print_all_page_text()  # only used for debugging purposes

The rest of WebCrawler's functions are internal functions that should not be called by the user (think private in C++).

Upon construction, a WebCrawler object creates a MyHTMLParser object. The MyHTMLParser class inherits from the built-in Python class HTMLParser; I use it when searching for the <h1> tag. MyHTMLParser creates instances of a helper class named Tag, which is used to build a "linked list" of tags.

To get started with WebCrawler, make sure to use Python 2.7.2. Enter the recipe's code a piece at a time into IDLE, in the order displayed there; this ensures that you import libraries before you start using them. Once you have entered all the code into IDLE, you can start crawling the 'interwebs' by entering the following:

    import re
    cnn_url_regex = re.compile('(?<=[.]cnn)[.]com')
    w = WebCrawler(cnn_url_regex)
    w.start_crawling(['http://www.cnn.com/2012/02/24/world/americas/haiti-pm-resigns/index.html?hpt=hp_t3'], 1)

Of course you can enter any page you want, but the regular expression object is already set up to filter on cnn.com. Remember, the second parameter passed into the start_crawling function is the recursion depth.

Happy Crawling!
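The recipe's full WebCrawler/MyHTMLParser code lives on the recipe page. As a rough standalone sketch of the same idea (regex-filtered, depth-limited crawling using only the standard library, not the recipe's classes), something like this works:

```python
import re
import urllib.request

LINK_RE = re.compile(r'href="(https?://[^"]+)"')  # absolute links only

def crawl(seed_pages, depth, url_filter, seen=None):
    """Visit seed_pages and the links they contain, `depth` levels deep,
    keeping only URLs that match the compiled regex `url_filter`."""
    seen = set() if seen is None else seen
    if depth < 0:
        return seen
    for url in seed_pages:
        if url in seen or not url_filter.search(url):
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
        except Exception:
            continue  # unreachable page, non-HTML content, etc.
        crawl(LINK_RE.findall(html), depth - 1, url_filter, seen)
    return seen

# cnn_url_regex = re.compile(r'(?<=[.]cnn)[.]com')
# print(crawl(['http://www.cnn.com/'], 1, cnn_url_regex))
```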
Safe HTML string and unicode (Python)
Recipe 578008 (rev. 2) by Garel Alex, 2012-01-10. Tags: html, security, web.
http://code.activestate.com/recipes/578008-safe-html-string-and-unicode/

When you display messages on a web page, you have to sanitize input data coming from users to avoid XSS (https://en.wikipedia.org/wiki/Cross-site_scripting). This small recipe uses a special class for strings so we can be sure they stay safe all the way through.

Get user's IP address even when they're behind a proxy (Python)
Recipe 577795 by Ben Hoyt, 2011-07-15. Tags: address, cgi, ip, web, webpy.
http://code.activestate.com/recipes/577795-get-users-ip-address-even-when-theyre-behind-a-pro/

A function to get the user's IP address in a web app or CGI script, even when they're behind a web proxy. We use web.py as our web framework; change web.ctx.env and web.ctx.get('ip') to whatever your framework's equivalents are for the CGI environment variables and REMOTE_ADDR.
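The core idea is to prefer the proxy-supplied X-Forwarded-For header over REMOTE_ADDR. A framework-neutral sketch against a WSGI/CGI-style environment dict (not the recipe's web.py code) looks roughly like this:

```python
def get_client_ip(environ):
    """Return the client's IP address from a WSGI/CGI-style environment."""
    forwarded = environ.get('HTTP_X_FORWARDED_FOR', '')
    if forwarded:
        # The header can carry a chain "client, proxy1, proxy2";
        # the left-most entry is the original client. It is also
        # client-controlled, so don't trust it blindly.
        return forwarded.split(',')[0].strip()
    return environ.get('REMOTE_ADDR', '')
```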
Web based Query Browser (PHP)
Recipe 577753 by Jonathan Fenech, 2011-06-15. Tags: based, browser, php, query, web.
http://code.activestate.com/recipes/577753-web-based-query-browser/

A web-based query browser for MySQL. If you require a password for MySQL, add it in the database connection call:

    // Connect to the database
    $conn = mysql_connect('localhost', 'root', 'PASSWORD GOES HERE');

ActiveState recipe statistics (Python)
Recipe 577732 (rev. 2) by Kaan Ozturk, 2011-06-02. Tags: html, regular_expressions, statistics, urllib2, web.
http://code.activestate.com/recipes/577732-activestate-recipe-statistics/

Downloads the "All Recipe Authors" pages from ActiveState and uses regular expressions to parse each author's name and recipe count from every page. It then displays the recipe submission distribution: how many authors have submitted how many recipes each.

url_spider (Python)
Recipe 577608 (rev. 3) by amir naghavi, 2011-03-14. Tags: database, regex, web.
http://code.activestate.com/recipes/577608-url_spider/

A simple URL spider that goes through web pages and collects URLs.

Download all lolcat images from iCanHasCheezburger.com (Python)
Recipe 577603 by Rahul Anand, 2011-03-10. Tags: download, images, lolcat, python, web.
http://code.activestate.com/recipes/577603-download-all-lolcat-images-from-icanhascheezburger/

Running this Python script will download all lolcat images from http://icanhascheezburger.com to the current folder, starting from the oldest image. Images are collected into subfolders lolcat0, lolcat1, etc., each containing 300 images. The script can be stopped and resumed at any time. Make sure to create the files lolconfig.txt and log.txt in the same folder before running the script: lolconfig.txt must initially contain the string 1496/1496/0, and log.txt starts out empty.

LoggingWebMonitor - a central logging server and monitor (Python)
Recipe 577025 (rev. 3) by Gabriel Genellina, 2010-02-02. Tags: client_server, debugging, distributed, logging, remote, sysadmin, web.
http://code.activestate.com/recipes/577025-loggingwebmonitor-a-central-logging-server-and-mon/

LoggingWebMonitor listens for log records sent from other processes running on the same box or network, collects and saves them concurrently in a log file, and shows a summary web page with the latest N records received.

GAE User Session with HTTP Basic Authentication (Python)
Recipe 577235 (rev. 6) by Berend, 2010-05-20. Tags: appengine, appspot, authentication, clients, gae, google, python, sessions, web, wsgi.
http://code.activestate.com/recipes/577235-gae-user-session-with-http-basic-authentication/

HTTP Basic is an insecure but easy-to-implement authentication protocol. I think it is good enough for a simple client in front of an SSL-capable server. Google App Engine supports SSL, and this recipe sets up the user session using HTTP Basic. The gauth module contains the code from my not-really-a-recipe listing at: http://code.activestate.com/recipes/577217-routines-for-programmatically-authenticating-with-
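The gauth module itself is on the linked listing. Purely as a generic illustration of the mechanism (not the recipe's code), checking an HTTP Basic Authorization header in a WSGI-style handler looks roughly like this; the `users` dict stands in for whatever user store the application really uses:

```python
import base64

def basic_auth_user(environ, users):
    """Return the authenticated username, or None if the credentials are bad."""
    header = environ.get('HTTP_AUTHORIZATION', '')
    if not header.startswith('Basic '):
        return None
    try:
        # The header carries "Basic base64(username:password)".
        decoded = base64.b64decode(header[len('Basic '):]).decode('utf-8')
    except Exception:
        return None
    username, _, password = decoded.partition(':')
    return username if users.get(username) == password else None
```

Because the credentials travel base64-encoded (not encrypted) on every request, this only makes sense over SSL, which is the recipe's point.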
ur1.ca command-line client (Python)
Recipe 577236 (rev. 2) by Cong, 2011-03-23. Tags: scraping, shortening, url, web.
http://code.activestate.com/recipes/577236-ur1ca-command-line-client/

ur1.ca (http://ur1.ca/) is the URL shortening service provided by status.net. This script makes it possible to access the service from the command line. It works by scraping the returned page and looking for the shortened URL.

Userfriendly Webpage Template (Python)
Recipe 577203 (rev. 5) by david.gaarenstroom, 2010-05-04. Tags: cgi, html, httpserver, mvc, template, web, webdesign, webpagetemplate.
http://code.activestate.com/recipes/577203-userfriendly-webpage-template/

A user-friendly template class targeted at web-page usage and optimized for speed and efficiency.

Tags can be inserted into a template HTML file in a non-intrusive way, using specially formatted comment strings. The template file can therefore be viewed in a browser, even with prototype data embedded in it, which will later be replaced by dynamic content. Web designers can also continue to work on the template and upload it without further modification.
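The recipe defines its own comment format and template class. Purely to illustrate the comment-as-placeholder idea, a toy version with a made-up <!-- tag:name --> syntax could look like this:

```python
import re

TAG_RE = re.compile(r'<!--\s*tag:(\w+)\s*-->')

def render(template_text, context):
    """Replace placeholder comments with values from context,
    leaving unknown placeholders untouched."""
    return TAG_RE.sub(lambda m: str(context.get(m.group(1), m.group(0))),
                      template_text)

html = '<h1><!-- tag:title --></h1>\n<p><!-- tag:body --></p>'
print(render(html, {'title': 'Hello', 'body': 'Dynamic content goes here.'}))
```

Because the placeholders are ordinary HTML comments, the template still renders cleanly in a browser before any substitution happens, which is the property the recipe is after.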
Method-based URL dispatcher for the Tornado web server (Python)
Recipe 576958 (rev. 5) by Dan McDougall, 2009-11-20. Tags: dispatcher, shortcuts, subclass, tornado, url, web.
http://code.activestate.com/recipes/576958-method-based-url-dispatcher-for-the-tornado-web-se/

The MethodDispatcher is a subclass of tornado.web.RequestHandler that uses the methods contained in subclasses of MethodDispatcher to handle requests. In other words, instead of having to make a new RequestHandler class for every URL in your application, you can subclass MethodDispatcher and use the methods contained therein as your URLs. The MethodDispatcher also adds the convenience of automatically passing arguments to your class methods, so there is no need to use Tornado's get_argument() method.

Example

To demonstrate the advantages of using MethodDispatcher, here is a standard Tornado app with multiple URLs, re-written using MethodDispatcher.

The standard Tornado way:

```python
class Foo(tornado.web.RequestHandler):
    def get(self):
        self.write('foo')

class Bar(tornado.web.RequestHandler):
    def get(self):
        self.write('bar')

class SimonSays(tornado.web.RequestHandler):
    def get(self):
        say = self.get_argument("say")
        self.write('Simon says, %s' % `say`)

application = tornado.web.Application([
    (r"/foo", Foo),
    (r"/bar", Bar),
    (r"/simonsays", SimonSays),
])
```

The MethodDispatcher way:

```python
class FooBar(MethodDispatcher):
    def foo(self):
        self.write("foo")

    def bar(self):
        self.write("bar")

    def simonsays(self, say):
        self.write("Simon Says, %s" % `say`)

application = tornado.web.Application([
    (r"/.*", FooBar)
])
```

Notes

As you can see from the example above, using the MethodDispatcher can significantly reduce the complexity of Tornado applications. Here are some other things to keep in mind when using the MethodDispatcher:

- MethodDispatcher will ignore any methods that begin with an underscore (_). This prevents builtins and private methods from being exposed to the web.
- The '/' path is special: it always maps to self.index().
- MethodDispatcher does not require your methods to distinguish between GET and POST requests. Whether a GET or a POST is performed, the matching method will be called with any passed arguments or POSTed data. Because of the way this works, you should not define get() and post() in your MethodDispatcher subclasses unless you want to override this functionality.
- When an argument is passed with a single value (/simonsays?say=hello) the value will be de-listed, i.e. passed to your method as {'say': 'hello'}. This overrides the default Tornado behavior, which would return the value as a list: {'say': ['hello']}. If more than one value is passed, MethodDispatcher falls back to the default behavior.

Using proxy connection for QWebView (Python)
Recipe 576921 (rev. 4) by Keisuke URAGO, 2009-10-02. Tags: browser, gui, pyqt, qt4, web.
http://code.activestate.com/recipes/576921-using-proxy-connection-for-qwebview/

QWebView is a powerful web browser widget. This script lets it use an HTTP proxy host. The log file, named minibrowser.log, is written to the same directory.
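The recipe's full mini-browser is on the recipe page; the proxy part boils down to something like the following PyQt4 sketch (the proxy host and port are placeholders):

```python
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtCore import QUrl
from PyQt4.QtNetwork import QNetworkProxy
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)

# Route all Qt network traffic, including QWebView page loads, through an HTTP proxy.
proxy = QNetworkProxy(QNetworkProxy.HttpProxy, 'proxy.example.com', 8080)
QNetworkProxy.setApplicationProxy(proxy)

view = QWebView()
view.load(QUrl('http://www.example.com/'))
view.show()
sys.exit(app.exec_())
```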