How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda

Without using a C++ compiler, my setup (see below) works well (though slowly, of course).

I used the code example from your book “Deep Learning with Python” (“Develop your first neural network with Keras”, page 47, chapter 7.8). Note that I didn’t follow your installation guide on this site exactly, but it seems to me I’m only a small step away from success.

Python 3.6.1 (v3.6.1:69c0db5) [MSC v.1900 64 bit (AMD64)] on win32

File "C:\Python36\lib\site-packages\keras\engine\topology.py", line 391, in add_weight
    weight = K.variable(initializer(shape), dtype=dtype, name=name)
File "C:\Python36\lib\site-packages\keras\backend\theano_backend.py", line 2191, in random_uniform
    return rng.uniform(shape, low=minval, high=maxval, dtype=dtype)
File "C:\Python36\lib\site-packages\theano\sandbox\rng_mrg.py", line 1354, in uniform
    rstates = self.get_substream_rstates(nstreams, dtype)
File "C:\Python36\lib\site-packages\theano\sandbox\rng_mrg.py", line 66, in multMatVect
    [A_sym, s_sym, m_sym, A2_sym, s2_sym, m2_sym], o, profile=False)
File "C:\Python36\lib\site-packages\theano\gof\cmodule.py", line 302, in dlimport
    rval = __import__(module_name, {}, {}, [module_name])
ImportError:

Error in pywrap_tensorflow.py after successful tensorflow install #7623

OK, still a newbie problem here ...

Reading this thread, I tried to install the Visual C++ 2015 redistributable (x64 version) per @Carmezim.

However, that simply throws an installer error saying it cannot install over a newer version already on my system.

So I also tried manually adding the existing msvcp140.dll (as C:\Windows\System32\msvcp140.dll) to my user %PATH% variable, which similarly fails to solve the problem.

C:\Users\jeffh>python
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File

No module named '_pywrap_tensorflow_internal'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\jeffh\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
    from tensorflow.python import *
  File

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.

Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases.

Install Mocha with npm, either globally or as a development dependency for your project. To install Mocha v3.0.0 or newer with npm, you will need npm v2.14.2 or newer.
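The original install snippets did not survive extraction; these are the standard npm commands for the two install modes just described:

```shell
# install globally, putting the mocha executable on your PATH
npm install --global mocha

# or install as a development dependency of the current project
npm install --save-dev mocha
```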

Create a test file in your editor, run it in the terminal, and set up a test script in package.json so the suite runs via npm test. If you use callback-based async tests, Mocha will throw an error if done() is called multiple times.

Instead of a callback, a test may return a Promise; this is useful if the APIs you are testing return promises instead of taking callbacks. For fluent promise assertions, this style pairs well with Chai as Promised.

In Mocha v3.0.0 and newer, returning a Promise and calling done() will result in an exception, as this is generally a mistake: such a test will fail with Error: Resolution method is overspecified.

If your JS environment supports async/await, you can also write asynchronous tests that way. When testing synchronous code, omit the callback and Mocha will automatically continue on to the next test.

For example, you may wish to populate a database with dummy content before each test. You may also pick any file and add “root”-level hooks.

This will cause the callback to beforeEach() to run before any test case, regardless of the file it lives in (this is because Mocha has an implied describe() block, called the “root suite”).

Running mocha with the --delay flag will attach a special callback function, run(), to the global context. “Pending” test cases (as in “someone should write these test cases eventually”) are simply those without a callback; pending tests will be reported as such.

You can execute an individual test case with .only(). Prior to v3.0.0, .only() used string matching to decide which tests to execute.

In v3.0.0 or newer, .only() can be used multiple times to define a subset of tests to run. You may also choose multiple suites, but individual tests will have precedence. Note: hooks, if present, will still be executed.

To skip multiple tests in this manner, use this.skip() in a “before” hook. Before Mocha v3.0.0, this.skip() was not supported in asynchronous tests and hooks.

No special syntax is required — plain ol’ JavaScript can be used to achieve functionality similar to “parameterized” tests, which you may have seen in other frameworks.

For example, you can generate test cases by looping over a table of inputs and expected outputs, producing one spec per row. Many reporters will display test duration, as well as flagging tests that are slow, as with the “spec” reporter.

To tweak what’s considered “slow”, you can use the slow() method. Suite-level timeouts may be applied to entire test “suites”, or disabled via this.timeout(0).

Test-specific timeouts may also be applied, or this.timeout(0) can be used to disable timeouts altogether. Hook-level timeouts may also be applied; again, use this.timeout(0) to disable the timeout for a hook.

In v3.0.0 or newer, a parameter passed to this.timeout() greater than the maximum delay value will cause the timeout to be disabled.

Lingering state after a test run is indicative of tests (or fixtures, harnesses, code under test, etc.) which don’t clean up after themselves properly. To ensure your tests aren’t leaving messes around, here are some ideas to get started. Note also that --compilers is deprecated as of Mocha v4.0.0.

Note the difference between mocha debug and mocha --debug: mocha debug will fire up node’s built-in debug client, mocha --debug will allow you to use a different interface — such as the Blink Developer Tools.

By using this option in conjunction with --check-leaks, you can specify a whitelist of known global variables that you would expect to leak into global scope.

The --require option is useful for libraries such as should.js, so you may simply --require should instead of manually invoking require('should') within each test file.

Mocha accepts multiple --file flags to include multiple files; the order in which the flags are given is the order in which the files are included in the test suite.

In the exports interface, the keys before, after, beforeEach, and afterEach are special-cased, object values are suites, and function values are test cases. The QUnit-inspired interface matches the “flat” look of QUnit, where the test suite title is simply defined before the test cases.

Failures are highlighted with red exclamation marks (!), pending tests with a blue comma (,), and slow tests in yellow.

The “list” reporter outputs a simple specifications list as test cases pass or fail, outputting the failure details at the bottom of the output.

The “JSON stream” reporter outputs newline-delimited JSON “events” as they occur, beginning with a “start” event, followed by test passes or failures, and then the final “end” event.

For example, given a simple test file, running mocha --reporter doc against it yields an HTML fragment documenting the suite. The SuperAgent request library’s test documentation was generated with Mocha’s doc reporter via a Bash command; view SuperAgent’s Makefile for reference.

A typical setup might look something like the following: call mocha.setup('bdd') to use the BDD interface before loading the test scripts, then run them onload with mocha.run().

The following option only functions in a browser context: noHighlighting: if set to true, do not attempt to use syntax highlighting on output test code.

With this, you may then invoke mocha with additional arguments, for example enabling Growl support and changing the reporter to list. By default, mocha looks for the glob ./test/*.js, so you may want to put your tests in the test/ folder.
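In this era of Mocha, such default arguments typically lived in a test/mocha.opts file; a hypothetical example enabling Growl support and the list reporter might read:

```
--reporter list
--growl
```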

If you want to include subdirectories, use --recursive, since ./test/*.js only matches files in the first level of test/ and ./test/**/*.js only matches files in the second level of test/.

Wallaby.js is a continuous testing tool that enables real-time code coverage for Mocha with any assertion library in VS Code, Atom, JetBrains IDEs (IntelliJ IDEA, WebStorm, etc.), Sublime Text and Visual Studio for both browser and node.js projects.