Creating a new Angular Site on Google Firebase

Angular is a framework for creating front-ends for websites. Technically, the site is static from a web hosting perspective, meaning that in and of itself it does not have a database, but rather only serves static files from the web server. From a user perspective, it can appear to be very dynamic, with the ability to log in, create and work with data, and other normal web application behaviors.

The dynamic portions of an Angular app require integration with back-end services. Angular apps simply call out to these external services, and are not typically a part of providing the services.

The creation of these services is both out of scope for Angular and out of scope for this tutorial. Instead, we’ll create a very simple static website with Angular and deploy it to Firebase.

Firebase is a set of services provided by Google. One service is hosting, which primarily hosts static websites such as those created with Angular. Google offers a free tier that allows you to get a site up and running quickly and easily.


Prerequisites

Ensure you have the following installed:

  • Node 10+ (node -v)
  • NPM 6+ (npm -v)
  • Angular CLI 8+ (ng --version)

Create Angular app

In the folder where you want to create the app, run

ng new <your-app-name>

It will ask you questions. If you are not sure what answer to choose, you can use the defaults. Routing allows you to have URLs that point to pages, which technically simulates multiple pages even though it is still a “single page” application. This is useful if you have menu navigation. The stylesheet option lets you choose between plain CSS and preprocessors such as SCSS, Sass or Less.

Launch your app:

npm start

Open in your browser: http://localhost:4200/

You can now edit the src/app/app.component.html file to make it your own. When it looks good, build it for production.

ng build --prod

This creates static output in a dist folder in your project. This is what we’ll deploy to Firebase.

Deploy to Firebase

Use your Google account to log into the Firebase website (firebase.google.com). Then click on “Go to console” in the top right. Create a new project and give it a name. Now you can come back to the console and drill into your project at any time.

Install the Firebase CLI on the computer with your Angular project with the following command. This will update it if you already have it installed. Prefix with sudo if it fails with permission errors without it.

npm i -g firebase-tools

You will primarily use the command firebase to interact with it. You can use these commands to log into and out of a Google account:

firebase login
firebase logout

While in your project’s root directory, run

firebase init

This will prompt you with options to choose from. Begin by selecting Hosting. Select an existing project if you created one via the web console, or a new one otherwise. For your public directory, look in your dist folder. If it created another folder under it for your project, then you’ll include that. Let’s say your project is called helloworld; your public folder would be:

dist/helloworld

You can choose single page app since that is what Angular created.

Once it is done setting up the project, your process for deploying to Firebase will be to do a prod build to repopulate the dist folder, then to run a Firebase deploy command:

ng build --prod
firebase deploy

Posted in Angular, Development, Technology, Web


Using JSONP in Angular 8

JSONP is one way to get around CORS restrictions when getting data from a REST service. It does require a callback feature on the part of the service, which isn’t made clear in some of the blogs and samples online for JSONP. JSONP is useful when a service you want to call supports it, but you cannot modify that service to allow cross-origin requests.

If you are adding support for it in your REST service, you’ll be checking for a callback parameter, which we’ll name “callback” in this example. If your REST service is in Java using JAX-RS, you’d include this parameter:

@QueryParam("callback") String callback

Then your output might look like this:

if (callback != null) {
    output = callback + "(" + output + ");";
}

In your component’s module, you’ll need to add these to your @NgModule imports:

HttpClientModule, HttpClientJsonpModule
These are imported with:

import { HttpClientJsonpModule, HttpClientModule } from '@angular/common/http';

The odd thing I discovered was that you need HttpClientModule in your component’s module even if it is already declared in your application module. This isn’t very logical because the HttpClient worked just fine in my components with this only being in app.module.ts. But, when I tried to use JSONP, I kept getting this error in the JavaScript console:

Error: “Attempted to construct Jsonp request without JsonpClientModule installed.”

Yeah, I’m writing this blog to hopefully save another poor soul the torture of trying to connect those dots. I couldn’t find that error anywhere on the Internet; and, with HttpClient working just fine in the same component without JSONP, why do you need HttpClientModule in your submodule just for JSONP?

In your Angular service or component, you’ll then be able to use JSONP to call your service:

  this.http.jsonp(url, 'callback')
    .pipe(
      catchError(this.handleError('jsonpTest', 'ERROR'))
    ).subscribe(data => {
      this.log('data: ' + JSON.stringify(data));
    });

That second parameter in your jsonp call is the name of the query parameter it will pass to the back-end. The value of that parameter, the callback function name, is generated for you by Angular’s JSONP support.
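To make that round trip concrete, here is a plain-TypeScript sketch of what a JSONP-aware server does with the callback parameter; the callback name shown is hypothetical (Angular generates its own):

```typescript
// Simulate the server side of JSONP: when a callback name is supplied,
// wrap the JSON body in a function call; otherwise return plain JSON.
function renderResponse(json: string, callback?: string): string {
  return callback ? `${callback}(${json});` : json;
}

console.log(renderResponse('{"price": 42}'));
// {"price": 42}
console.log(renderResponse('{"price": 42}', 'jsonp_cb_0'));
// jsonp_cb_0({"price": 42});
```

The browser then executes the returned script, which invokes the generated callback with the data.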

Posted in Angular, Development, Technology, Web

Channels On Charts Need Log Scale

Log scale is an option you can turn on in most financial charting software.  Your software may default to off, so you need to check this.  You should really have log scale on your trading and investing charts all the time.  However, the case where you really need it is long-term trends and channels.

The reason is simple.  If you look at NVDA doubling from $20 on, the levels are 40, 80, 160 and, if it makes it, 320.  So if you invested $10k at $80 and it reached $160, you got the same 100% return on investment that you obtained when it went from $20 to $40.  In other words, in capital markets, it is percent return on investment that matters, or growth, not actual dollar amounts of individual stocks.  You only care how much your $10k will make.

If you don’t use log scale, trends will appear to shoot up like a rocket, as a stock might go from $80 to $160 in the same amount of time it went from $20 to $40.  This creates the appearance of a false breakout.  You won’t know if something truly breaks above the channel top unless you are using log scale to normalize moves into percent gains.

So, here is a chart of NVDA with log scale enabled.  A nice clean channel.  You can see via weekly bars when it broke, and you can see the result.  Down, down, down.  Some people will note the market tanked during this time.  But isn’t that really a chicken and egg question?  You have FB and NVDA breaking their channels, and plunging as a result.  Doesn’t this contribute to the velocity of the market drop, especially with correlations?

Now, look at this same chart where the only difference is log scale is disabled:

Looks very different.  You might have had some sort of channel for the past year, but not over this larger time frame going back to 2015.  A move from 20 to 300 is 15x.  Investors from 20 to 160 multiplied their investment eightfold, while those jumping in at 150 no more than doubled it.

To understand it clearly, look at the distances between the prices on the Y axis.  With log scale, 20 and 40 are a lot further apart than 260 and 280, despite both being $20 differences.  If you bought at 260 and sold at 280, did you double your money?  Or was it a smaller move for you than for the one who bought at 20 and sold at 40?
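The arithmetic can be sketched in a few lines of TypeScript, using the NVDA doubling levels from above: every doubling is the same 100% return, so on a log scale each one occupies the same vertical distance.

```typescript
// Each doubling is the same percent return, so on a log scale the
// vertical distance between consecutive prices is identical (ln 2).
const levels = [20, 40, 80, 160, 320];

for (let i = 1; i < levels.length; i++) {
  const pctReturn = (levels[i] - levels[i - 1]) / levels[i - 1];     // always 1.0 (100%)
  const logDistance = Math.log(levels[i]) - Math.log(levels[i - 1]); // always ln 2 ≈ 0.693
  console.log(`${levels[i - 1]} -> ${levels[i]}: ${pctReturn * 100}% return, ` +
              `log distance ${logDistance.toFixed(3)}`);
}
```

On a linear scale, the same moves are $20, $40, $80 and $160 apart, which is what makes the trend look like a rocket.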




Posted in Finance, Investing, Trading

SPX 2009 Channel Bottom Touched – First In 2018

On Friday, Oct 26th, 2018, SPX touched its 2009 channel bottom near 2630 for the first time since Feb 2016!  Predictably, it bounced over 60 points before coming back down.  First touches are nearly always bought hard.

Being a Friday, when you have weekly close, it is unlikely it would close below 2630.  It just doesn’t do that on a bull run channel bottom on first touch in years.  But that doesn’t preclude the possibility of it testing that low, and possibly lower, as early as Monday.  Measured move targets on multiple time frames are at 2606 and 2601.  That will likely be bought.  But, in case it goes lower, the weekly 100 SMA on SPX is 2675.  Then, of course, you have the prior Feb low near 2532. 

If we hit a bottom this week, and it reverses, then what?  Good question.  The number of weeks between tests of the bull run channel low has ranged from 4 weeks in 2016 to 13-14 weeks in prior bull runs.  Keep in mind that every time is different, though, so this is a guide, not a guaranteed outcome.  What’s behind this one, in addition to rising rates, is the simultaneous $75b of US treasuries being sold by the US Treasury (UST) per month to finance our budget deficit, as well as $50b of US treasuries being sold by the Federal Reserve Bank (FRB) as part of their balance sheet reduction, aka Quantitative Tightening (QT), as well as who knows what else they are selling.  This last part is unprecedented.

$75b per month is high enough.  Usually the FRB is buying some of those.  But, now, instead of buying, the FRB is selling, bringing the total to a historically high $125b per month!

That said, let’s look at what happened in 2016. 

In 2016, it did a perfect 50% touch before going down.  In SPX, a perfect touch is within 1 point.  Then it went back down to test the low.  Then up up up and away. 

Now our current 2018 setup puts half way back (HWB) at 2784. 

Keep in mind, though, that the bottom before going up may not be in yet.  Monday can easily put in a lower low and hit our target of 2606.  In that case, we’ll need to redraw this fib set to locate our new HWB, lower than 2784.  If our new low is 2606, then HWB becomes 2772.93.
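The HWB arithmetic is just the midpoint of the swing. A quick TypeScript check (the 2939.86 swing high here is the value implied by the numbers above, not an exact market print):

```typescript
// Halfway back (HWB) is the midpoint between a swing high and a swing low.
function halfwayBack(swingHigh: number, swingLow: number): number {
  return (swingHigh + swingLow) / 2;
}

console.log(halfwayBack(2939.86, 2606).toFixed(2)); // "2772.93" - the lower-low case
console.log(halfwayBack(2938.00, 2630).toFixed(2)); // "2784.00" - the current setup
```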

The point is, based on prior first touches of channel bottom, this is a buy for a HWB run up.  If we get a weekly open significantly below this channel bottom, like below 2600, then it’s game over, and you’ll look to get out, ideally for a small profit.  But risk/reward is very much in your favor down here.  You can’t get a much better swing low setup long.

Here’s the big picture 2009 bull channel:


Posted in Finance, Investing, Trading

Big Brother Watch Update, Aug 2018


Google helps China control citizens


Copyright is the new poster child for censorship in EU

New Copyright Powers, New “Terrorist Content” Regulations: A Grim Day For Digital Rights in Europe



Posted in Uncategorized

The Magic Pill – Keto Diet Documentary

The Magic Pill is a documentary on the Keto diet available on Netflix.  The Keto diet is basically about returning to how we naturally ate before the industrialization of food; specifically, eating more fat and fewer carbs.
My take on The Magic Pill is that there is significant merit in the overall principles.  Yet, it could have helped its cause by being more objective in its presentation.
On fruit…  There is NOTHING in any study that verifies fruit is a problem; and, by leaving it out, they contradict the “natural foods we ate for thousands of years” principle.  To be sure, they didn’t demonize fruit in this.  But, they basically excluded it from the list of things to embrace.  I’ve heard others speak of it as just another source of carbs, ignoring that it is very low on the glycemic index.  The whole theory on carbs causing diabetes is in part premised on the carbs being high on the glycemic index, flooding the body too fast with glucose, which is why countries that eat a lot of rice, high on the glycemic index, tend to have high diabetes rates.  It also ignores the many health benefits of fruit, including the obvious vitamins, live enzymes you can only get from fresh fruit, and many unknown nutritional properties we are just beginning to realize help our health.  Long live fruit!  I would say, instead of getting rid of carbs, go back to how we ate carbs for thousands of years.  Eat fruit!
One misconception of theirs concerns why people used to be fit.  It is true that they were.  But they went with the assumption it was solely because of diet.  The primary reason we were all once fit is that we didn’t work in offices and have cars!  Work on a farm all day without gas-powered equipment, and you’ll be a lot more fit than if you work in an office.  Note that even in high-labor fields machines are replacing human labor, so even farmers might be getting less exercise today if they sit on a machine all day.  Prior to modern living, each day was filled with exercise we no longer get.  Today, you need a gym membership.
That said, I believe evidence has been mounting to support some of their dietary claims.  You just have to view it as a pendulum as people can go from one extreme to another. EVERYONE in the debate is guilty of making false assumptions. One may have more truth than the other. But you have to ask what assumptions they are purporting.
Saying “I’ll listen to you when you have 40,000 years of research” in the beginning was childish.  No one has 40,000 years of research.  They should have cut that from the video.  On top of that, they had people GUESS what killed people before infectious diseases.  No one knew!  They guessed “old age”.  But no one presented real data.
For the record, infectious diseases have ALWAYS killed people.  They called that pestilence.  As long as there were cities, pestilence killed people.  We have been building cities for thousands of years.  It is natural for us, beginning with villages, and growing to markets and ports, to create cities. 
One thing I learned is the average person makes over 1,000 false assumptions per day.  We’re wired to do this so we can make quick decisions in a world of unknowns.  Drive at night in a snowstorm on a country freeway with no lights and no traffic other than you.  If you make it home without going off the road, ask yourself how you did it.  Were you always 100% sure what the outcome of every decision would be?  Did you always know where the road was going?
I first learned this as a computer programmer when I was a kid, where your bugs in the first few years are primarily caused by your own incorrect assumptions. Once you realize how true it is, you’ll get better at identifying them in every discussion.  The better you get at identifying assumptions, the quicker you can get to the truth. 
I do strongly believe society has become way too dependent on pills; and taking pills to solve problems caused by poor diet is ridiculous.  That was one of the core things in The Magic Pill I agreed with very strongly.  The obvious conclusion is that we can impact our health a lot by changing our diet.
My principle is to eat what I want.  But, if something is more healthy, simply try to eat more of it.  If you eat more of something healthy, you’ll end up eating less of the unhealthy things.  This is how you can grow without the issues of a restricted diet, where you’ll binge or fall back due to cravings.  If you have cravings, just try to fill them with something more healthy.  Don’t get religious about absolute abstention.  I identified drinking pop half the day, every day, as a health concern.  Yet, I’ll drink it on occasion, and overall replace it with something else I love in the meantime that is less harmful.
That is how I approach Keto.  Give up my pasta, pizza, etc.?  Heck no!  But why not cut out carbs that I don’t really love, like junk food or processed meals, and eat more healthy fats I enjoy?  I don’t believe we can eat too much fruit and vegetables.  So, I load up on the ones I love, and prepare them in a way I enjoy, which sometimes means adding just olive oil and salt, and other times butter and cheese.  Every step we take towards better health adds up.
Being happy in the process drives you to continue.   Learning what is healthy, and what isn’t, is the beginning of that process.  Find healthy things you love!  And eat more of them. 
Posted in Fun, Learning

Testing Galera in Kubernetes

I deployed a 3 node Galera cluster in Kubernetes. Galera clusters MariaDB or MySQL, allowing you to read and write to all nodes while remaining consistent (ACID) at all times. Kubernetes is a deployment environment for container applications.

Here are key features of Galera:

– It uses MariaDB instances, and then uses a plug-in to cluster them. So, you are still using stock MariaDB instances.

– It uses the InnoDB table type, which has been my default since it was introduced, and is now the out-of-the-box default. InnoDB introduced ACID to MySQL long ago.

– Every node is a master/slave. So you can write to any node.

– Unlike typical horizontal clustering, which typically offers eventual consistency, this provides consistency across nodes at all times.

What this means is that, from a functional perspective, you can continue to use it for your OLTP applications requiring ACID.

Its primary benefit is when a node fails: as long as quorum is met (a majority of nodes still up), the database remains available for transactions.

Enter Kubernetes (K8S), and a node failure is quickly remedied as soon as K8S can. If I kill a node, it brings it back up within a minute or two. In the meantime, the other 2 of 3 nodes remain up, and continue to serve transactions since 2/3 is a majority. This is the primary benefit of Galera, and Kubernetes is the ideal environment for it.
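The quorum rule can be sketched as a simple majority check (an illustration of the concept, not Galera's actual implementation, which can also weight node votes):

```typescript
// A Galera cluster only serves transactions while a strict majority
// of its nodes can still talk to each other.
function hasQuorum(nodesUp: number, clusterSize: number): boolean {
  return nodesUp > clusterSize / 2;
}

console.log(hasQuorum(2, 3)); // true  - 2 of 3 up, transactions continue
console.log(hasQuorum(1, 3)); // false - quorum lost, writes stop
console.log(hasQuorum(2, 4)); // false - an even split is not a majority
```

The even-split case is why odd cluster sizes like 3 or 5 are the usual recommendation.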

While Galera doesn’t provide load balancing, K8S does, as you connect in K8S to the single service name that routes the connection to a node that is currently available.

I tested this, and added a row to a database to one of the up nodes while a node I just killed was being recreated automatically by K8S, yet still down. When the killed node was restored, it too had the new row in the table. So, new nodes “catch up” to missed transactions automatically.

I have not reviewed the performance impact; but, guaranteeing consistency across nodes 100% of the time has a performance cost when compared to a horizontal database with eventual consistency. Yet, performance is likely to be better than a single node since replication can be extremely efficient (think low level processing, without having to duplicate query processing). Your primary benefit, though, is higher availability.

Testing in Kubernetes

If you’d like to give it a whirl, here are instructions for how to test it. 

Create a cluster and deploy a 3 node Galera cluster. I had no problem deploying Galera in Google Cloud to a cluster using these 3 YAMLs.

View in Kubernetes console

kubectl proxy

Access via


To use Skip at the dashboard login and have Admin privileges, load dashboard-admin.yaml, which you can create per these instructions.

In order to test from a local db client, create a port-forward rule.  Here I use a different port because my local machine has its own instance of a MariaDB server listening on 3306.

# Listen on port 13306 locally for port 3306 of pod 'mysql-0'
 kubectl port-forward mysql-0 13306:3306

You can easily kill it and change the pod to jump around from one instance or another.  When I killed mysql-2, I inserted in mysql-0 while mysql-2 was still down.  Then when mysql-2 was back up, I changed the port forward to mysql-2 to verify it had the new row inserted while it was down.  Alternately, you can port forward to all 3 pods on 3 different ports.

To connect, use this from a local client instance where you have MariaDB or MySQL installed:

mysql -h 127.0.0.1 -P 13306 -u root -p

To test the Galera cluster you can follow these instructions.


Cleanup

In addition to deleting the test cluster, you’ll need to delete the Persistent Volumes, which you can find under Google’s Compute Engine Disks if you are using GCP.

Posted in Data, Technology

Added Charting to Automated Trading System

This is a continuation of Developing an Automated Trading System

Many of us use very robust charting software, including the popular thinkorswim platform, that does more than I plan to create in my system.  The requirement I ran into that could not be met by this software is the ability to chart unique data produced by my system that isn’t available to the third-party platforms, such as back testing results. 

Thus, I needed basic charting that allowed me to analyze things in the context of price history.  While a fully automated system won’t depend on charts, I, the human, play a role both in its development and improvement, as well as a cohesive role in automation.  To balance the human brain vs AI discussion, the goal is a “cyborg” in the beginning that becomes more and more machine as time passes.  Parts that are proven successful in production will remain in the cyborg while new parts are vigorously tested.

I had a few requirements when comparing charting libraries:

  1. Extensible free open-source.
  2. Works with Angular2, our choice for UI.
  3. Can do price history charting well (stock charts).
  4. Can easily add lines (studies and other calculations).
  5. Can update in real-time.

Other bells and whistles were considered, but those were the core requirements.  I chose ng2-nvd3 as it met these requirements and had nice bells and whistles such as zooming and resizing capability, and is user interactive.  This is a 3-tier stack:

D3.js – a JavaScript library for manipulating documents based on data.
NVD3 – re-usable charts for d3.js.
ng2-nvd3 – Angular2 component for nvd3.

The center of the stack is NVD3, as ng2-nvd3 just provides an Angular2 interface to it. Interfacing via ng2-nvd3 worked well.  You have complete access to NVD3 capability.  It also updates the chart when you update the data, as you expect from an Angular2 component.  So, this completely met the Angular2 requirement.  

NVD3 is a bit limited, though.  They have a gallery of charts you can view.  It can produce a nice candlestick or OHLC chart with high, low, open and close bars.  But, you cannot add lines to these, and the multiChart option does not currently support candlestick or OHLC chart types.  The multiChart type includes area, line and bar charting only.  I can live with this limitation for now.  I just have to chart close prices of the original price history as a line, and additional lines for things such as MAs.
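For reference, here is a sketch of the data shape handed to an nvd3 multiChart in this setup: the close prices and each MA become plain line series. The field names follow nvd3's series format; the symbols and prices are illustrative.

```typescript
// Each nvd3 multiChart series declares a legend key, a render type
// (multiChart supports only line, area and bar) and which Y axis it uses.
interface Nvd3Series {
  key: string;                         // legend label
  type: 'line' | 'area' | 'bar';
  yAxis: 1 | 2;                        // 1 = left axis, 2 = right axis
  values: { x: number; y: number }[];  // x is typically epoch millis
}

const chartData: Nvd3Series[] = [
  { key: '$SPX.X close', type: 'line', yAxis: 1,
    values: [{ x: 0, y: 2755 }, { x: 1, y: 2723 }] },
  { key: '$SPX.X 50 SMA-mo', type: 'line', yAxis: 1,
    values: [{ x: 0, y: 2741 }, { x: 1, y: 2739 }] },
  { key: '$RUT.X close', type: 'line', yAxis: 2,  // second index on the right axis
    values: [{ x: 0, y: 1560 }, { x: 1, y: 1548 }] },
];
console.log(chartData.length); // 3
```

Replacing this array is all the Angular2 component needs to do; data binding redraws the chart.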

Extensibility.  In the long run I’ll one day want candlestick charts with lines for MAs and other indicators.  I’ll also want lines for fibs, and other types of indicators, such as buy and sell signals, which might be up and down arrows, and other types of notation related to back testing.  There are two silver linings to the ng2-nvd3 stack.

nvd3 is open source, so it can be easily improved if one is willing to learn the code.  You can copy and edit the Javascript files your installation is using, then optionally turn your changes into a pull request if you want them to become part of the project.  I talked to the primary committer on the nvd3 project, and he’s eager to accept pull requests.  While having updates committed to primary project isn’t necessary, it is ideal so you can continue to easily upgrade in the future as well as share your love.  

On top of this, you can use d3 on your current charts.  I’ve already used it for some non-graphical utilities.  Your code has access to everything ng2-nvd3 and nvd3 has access to, including, of course, the DOM model generated by it.  So, you can easily learn and use D3 yourself to enhance your charts, perhaps to add the buy/sell signals, without even changing the nvd3 code.

Developing with D3 and extending nvd3 involves a learning curve.  While I’m heavily immersed with Typescript in Angular2 — and loving it — this does force you back into old Javascript, as d3 and nvd3 are both written in Javascript, not Typescript.  These are by no means show stoppers.  However, it does impact prioritization of time.  For this reason, I’ve limited myself for now to what I can do out-of-the-box as it permits me to get back to the original reason I decided to add charting next — the ability to view back testing results and signals I create.  

User Interface

The UI consists of 3 Angular2 components.  One child for the price history query parameters.  Another child for adding studies.  And the parent that brings those inputs together and outputs the chart.

This uses both the Angular2 @Input and @Output decorators that allow you to tie components together.   Because the chart automatically updates when the data changes due to data binding, including chart configuration, you can continue to add to and modify a chart after creating it using the controls. 

Angular2 Charting components

Because each child component requires the user to potentially update multiple fields before the chart can be updated correctly, each one has at least one button (Chart and Add).  When a button is pressed, the parent component receives the output and updates the chart.  Note that the StudyEntryComponent is in early stages of a WIP.  Yet, it can currently be used to add MAs to a chart. 

Charting Input Components

As you make modifications, clicking the Chart or Add buttons updates the chart.  You can also edit current MAs by selecting it, changing it and then clicking Chart.  The next image shows the table that is created as you add or edit MAs along with the resulting chart. 

Charting Output – Comparison with MAs

This chart demonstrates several features using nothing but out-of-the-box nvd3. 

If you resize the browser window, the chart automatically resizes.   While you can’t view the effect in the static image above, trust me, it works.  Have doubts? Check out the demos I linked to earlier.   

You can compare items using two different Y axes.  In this case, the Russell 2000 ($RUT.X) is on the right axis.  This currently creates studies for the underlying asset on the chart.  So, when we add an MA, it appears for both the S&P 500 ($SPX.X) and the Russell.  Being a two dimensional chart, you cannot have more than two Y axes.  If you include a third item or more, they will share the right axis, which will be extended to handle the full range of possible values.  The choice of which axis an item belongs to is something you can control as you set up the data.  But, you cannot have a third Y axis.  So, you have to factor this into the design and how raw data is handled, with the impact on the Y range being your primary concern.  Combining an item that ranges from 0 to 2 with an item that ranges from 2000 to 2200 on one Y axis will result in two flat looking lines far apart.

The user can interactively hide/show any of the lines by clicking the legend.  You can see above that $RUT.X 200 EMA-we and $RUT.X 50 SMA-mo are both hidden because their circles in the legend are not filled in.   

Another feature that differs from some charting software is that the interval of the MAs is not limited to the interval of the chart.  While the chart is displaying weekly bars here, we added monthly MAs to the chart.  This is important because the algos will typically use one minute bars for historical data, and one or more real-time quote updates per second; yet they need to be able to calculate MAs with intervals from 5 minutes to monthly.

Round Trip Data Flow

Currently, when it needs to update the chart, it simply does a REST call for price history, which has the ability to add studies via parameters.  When those results come back, our UI side transforms the data using Typescript into the representation required to chart it, and simply replaces the data field in the ChartNVD3PriceComponent given to nvd3 to create the chart.  Due to data binding, the chart updates the instant this data is updated.

The REST call itself uses the parameters to construct and invoke a third-party API call.  Our facade to the API converts the raw data returned to POJOs.  Because our interface to the API uses caching, this could be in memory and returned instantly.  With price history in POJOs, our service then adds studies to the data as new fields.  Then, it converts the POJOs to JSON and returns it as the output of the REST.

Our Angular2 component receives this data, transforms it into the charting representation, and updates the chart data.
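That transform step can be sketched as follows; the REST payload field names here are hypothetical, since the actual API's shape isn't shown:

```typescript
// Hypothetical shape of one price bar in the REST response.
interface PriceBar {
  datetime: number;  // epoch millis
  close: number;
}

// Map REST price history into the {x, y} points the chart component binds to.
function toLineValues(bars: PriceBar[]): { x: number; y: number }[] {
  return bars.map(bar => ({ x: bar.datetime, y: bar.close }));
}

const values = toLineValues([
  { datetime: 1000, close: 2630.5 },
  { datetime: 2000, close: 2656.1 },
]);
console.log(values); // [ { x: 1000, y: 2630.5 }, { x: 2000, y: 2656.1 } ]
```

Assigning the result to the component's data field is all data binding needs to redraw the chart.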

Looking Forward

Adding charting to the application gets us started so we can begin to create JSON of back testing results that can be used to produce charts.  To add back testing results to charts, in Angular2, we’ll be creating a new UI component for defining back testing requirements, much like the one we created to add studies. 

The exception to simply using a one-trip REST query might be if the back testing takes longer than it does today due to new complexity and permutations.  In that case, I’m likely to redesign it to simply add the request to a back testing queue, and allow the user to monitor the queue and view results when available.  One advantage of this is that results can be viewed at any time later, so long as they are on the list of queries that were previously queued.

WebSockets can be used to update the queue in the browser without the user having to click.  You will be able to see, in real-time, the progress of your request. 

WebSockets can be used to update the chart in real-time.  This will be important when using real-time quotes and monitoring trading.  With the exception of the data coming through WebSockets instead of REST, we won’t need to really change how charting works in Angular2, as it currently updates the chart whenever the data changes.  The only difference will be how the data changes.  Since we already use Angular2 for real-time updates of Level I and II quotes, monitoring of predictions, and order flow, using WebSockets to update a chart does not introduce a new technical feat.


Posted in Finance, Investing, Technology, Trading, Web

Created Backtesting of Signals and Algos

This is a continuation of Developing an Automated Trading System

I began algorithms with simple strategies.  A backtest runs a range of inputs for a strategy.  For example, you can test a range of trailing stops from 1 to 15% with 0.5% steps.  This will test 29 scenarios with the same data.

You can combine strategies testing multiple ranges.  If your ranges include 10 target scenarios, and 10 stop scenarios, it will test 100 scenarios, as it will test every combination of your ranges.  There is no limit to the number of ranges you can combine. The REST call to create the backtest parses your strategies, creates entry/exit factories and iterates through the ranges.  
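The range expansion amounts to a cartesian product of the per-parameter value lists. A minimal sketch (the function names are illustrative, not the system's actual classes):

```typescript
// Expand a range like "1 to 10 step 1" into its discrete values.
function expandRange(from: number, to: number, step: number): number[] {
  const values: number[] = [];
  // The small epsilon guards against floating-point drift on fractional steps.
  for (let v = from; v <= to + 1e-9; v += step) values.push(+v.toFixed(10));
  return values;
}

// Every combination of values across all ranges becomes one scenario.
function combine(ranges: number[][]): number[][] {
  let scenarios: number[][] = [[]];
  for (const range of ranges) {
    const next: number[][] = [];
    for (const s of scenarios) {
      for (const v of range) next.push([...s, v]);
    }
    scenarios = next;
  }
  return scenarios;
}

const targets = expandRange(1, 10, 1);  // 10 target values
const stops = expandRange(1, 10, 1);    // 10 stop values
console.log(combine([targets, stops]).length); // 100 scenarios
```

Adding a third range simply multiplies the scenario count again, which is why there is no limit to the number of ranges you can combine.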

On the entry side, I’m creating indicators that can be used to fire signals.  While the signals are simple today (all true, all false), the logic can become complex as algos become aggregations of signals weighted to make a decision.  This will be fed to machine learning and use other techniques for prediction and optimization. 

Technical description: No new technology here.  This introduces a pattern of phased data enhancement.  

I was recently inspired by the AI series Westworld.  This led me to increase generification and conceptual streaming and phased data enhancement, as I imagined the result being a high performance real-time analytics engine that could potentially handle complex decisions beyond the current application.  The goal here is to ultimately build an AI engine with practical purpose driving it rather than theory, as well as a real-time analytics engine that can be deployed to solve a number of problems in various industries.

For this reason, the back testing algos are designed to support real-time price updates that include time so they can handle their own temporal requirements, much like the human brain continuously analyzing real-time signals to help you make decisions.

Continued posts on Developing an Automated Trading System

Added Charting to Automated Trading System (Jan 18, 2017)

Posted in Business, Finance, Investing, Technology, Trading, Web | Leave a comment