SPX 2009 Channel Bottom Touched – First Time in 2018

On Friday, Oct 26th, 2018, SPX touched its 2009 channel bottom near 2630 for the first time since Feb 2016!  Predictably, it bounced over 60 points before coming back down.  First touches are nearly always bought hard. 

Since Friday brings the weekly close, it is unlikely SPX would close the week below 2630.  It just doesn’t do that on the first touch of a bull run channel bottom in years.  But that doesn’t preclude the possibility of it testing that low, and possibly lower, as early as Monday.  Measured move targets on multiple time frames are at 2606 and 2601.  Those will likely be bought.  But, in case it goes lower, the weekly 100 SMA on SPX is at 2675.  Then, of course, you have the prior Feb low near 2532. 

If we hit a bottom this week and it reverses, then what?  Good question.  The number of weeks between tests of the bull run channel low has ranged from 4 weeks in 2016 to 13-14 weeks in prior bull runs.  Keep in mind that every time is different, though, so this is a guide, not a guaranteed outcome.  What’s behind this one, in addition to rising rates, is the roughly $75b of US Treasuries being sold per month by the US Treasury (UST) to finance our budget deficit, combined with the $50b of US Treasuries being sold per month by the Federal Reserve Bank (FRB) as part of its balance sheet reduction, aka Quantitative Tightening (QT), plus who knows what else they are selling.  This last part is unprecedented.

$75b per month is high enough.  Usually the FRB is buying some of those.  But, now, instead of buying, the FRB is selling, bringing the total to a historically high $125b per month!

That said, let’s look at what happened in 2016. 

In 2016, SPX bounced to a perfect touch of the 50% retracement before turning back down.  In SPX, a perfect touch is within 1 point.  Then it went back down to test the low.  Then up, up and away. 

Now our current 2018 setup puts halfway back (HWB) at 2784. 

Keep in mind, though, that the bottom before going up may not be in yet.  Monday can easily put in a lower low and hit our target of 2606.  In that case, we’ll need to redraw this fib set to locate our new HWB, lower than 2784.  If our new low is 2606, then HWB becomes 2772.93. 
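For reference, here is a minimal sketch of the HWB math, which is just the 50% retracement between the swing high and the swing low.  The swing high used below is an assumption for illustration only; the post only states the resulting HWB levels.

// Halfway back (HWB) is the 50% retracement of the swing.
function halfwayBack(swingHigh: number, swingLow: number): number {
  return (swingHigh + swingLow) / 2;
}

// With an assumed swing high near 2940 (illustrative, not from the post):
halfwayBack(2940, 2630);  // 2785, close to the 2784 HWB above
halfwayBack(2940, 2606);  // 2773, roughly the lower HWB if 2606 becomes the new low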

The point is, based on prior first touches of channel bottom, this is a buy for a HWB run up.  If we get a weekly open significantly below this channel bottom, like below 2600, then it’s game over, and you’ll look to get out, ideally for a small profit.  But risk/reward is very much in your favor down here.  You can’t get a much better swing low setup long.

Here’s the big picture 2009 bull channel:

 

Posted in Finance, Investing, Trading

Big Brother Watch Update, Aug 2018

Censorship

Google helps China control citizens

Dragonfly

Copyright is the new poster child for censorship in EU

New Copyright Powers, New “Terrorist Content” Regulations: A Grim Day For Digital Rights in Europe


Posted in Uncategorized

The Magic Pill – Keto Diet Documentary

The Magic Pill is a documentary on the Keto diet available on Netflix.  The Keto diet is basically about returning to how we naturally ate before the industrialization of food: specifically, eating more fat and fewer carbs. 
 
My take on The Magic Pill is that there is significant merit in its overall principles.  Yet, it could have helped its cause by being more objective in its presentation. 
 
On fruit…  There is NOTHING in any study that verifies fruit is a problem; and, by leaving it out, they contradict their own “natural foods we ate for thousands of years” principle.  To be sure, they didn’t demonize fruit in this.  But, they basically excluded it from the list of things to embrace.  I’ve heard others speak of it as just another source of carbs, ignoring that it is very low on the glycemic index.  The whole theory of carbs causing diabetes is in part premised on the carbs being high on the glycemic index, flooding the body too fast with glucose, which is why countries that eat a lot of rice, which is high on the glycemic index, tend to have high diabetes rates.  It also ignores the many health benefits of fruit, including the obvious vitamins, live enzymes you can only get from fresh fruit, and many nutritional properties we are just beginning to realize help our health.  Long live fruit!  I would say, instead of getting rid of carbs, go back to how we ate carbs for thousands of years.  Eat fruit!
 
One misconception of theirs is about why people used to be fit.  It is true that they were.  But, the film went with the assumption that it was solely because of diet.  The primary reason we were all once fit is because we didn’t work in offices and have cars!!!  Work on a farm all day without gas-powered equipment, or do landscaping by hand, and you’ll be a lot more fit than if you work in an office.  Note that even in high-labor fields machines are replacing human labor, so even farmers might be getting less exercise today if they sit on a machine all day.  Prior to modern living, each day was filled with exercise we no longer get.  Today, you need a gym membership.
 
That said, I believe evidence has been mounting to support some of their dietary claims.  You just have to view it as a pendulum as people can go from one extreme to another. EVERYONE in the debate is guilty of making false assumptions. One may have more truth than the other. But you have to ask what assumptions they are purporting.
 
Saying “I’ll listen to you when you have 40,000 years of research” at the beginning was childish.  No one has 40,000 years of research.  They should have cut that from the video.  On top of that, they had people GUESS what killed people before infectious diseases.  No one knew!  They guessed “old age”.  But, no one presented real data. 
 
For the record, infectious diseases have ALWAYS killed people.  They called that pestilence.  As long as there were cities, pestilence killed people.  We have been building cities for thousands of years.  It is natural for us, beginning with villages, and growing to markets and ports, to create cities. 
 
One thing I learned is that the average person makes over 1,000 false assumptions per day.  We’re wired to do this so we can make quick decisions in a world of unknowns.  Drive at night in a blizzard on a country freeway with no lights and no traffic other than you.  If you make it home without going off the road, ask yourself how you did it.  Were you always 100% sure what the outcome of every decision would be?  Did you always know where the road was going?
 
I first learned this as a computer programmer when I was a kid, where your bugs in the first few years are primarily caused by your own incorrect assumptions. Once you realize how true it is, you’ll get better at identifying them in every discussion.  The better you get at identifying assumptions, the quicker you can get to the truth. 
 
I do strongly believe society has become way too dependent on pills, and taking pills to solve problems caused by poor diet is ridiculous.  That was one of the core things in The Magic Pill I agreed with very strongly.  The obvious conclusion is that we can impact our health a lot by changing our diet. 
 
My principle is to eat what I want.  But, if something is more healthy, simply try to eat more of it.  If you eat more of something healthy, you’ll end up eating less of the unhealthy things.  This is how you can grow without the issues of a restricted diet, where you’ll binge or fall back due to cravings.  If you have cravings, just try to fill them with something more healthy.  Don’t get religious about absolute abstention.  I identified drinking pop half the day, every day, as a health concern.  Yet, I’ll still drink it on occasion, and in the meantime replace it with something else I love that is less harmful. 
 
That is how I approach Keto.  Give up my pasta, pizza, etc.?  Heck no!!!  But, why not cut out carbs that I don’t really love, like junk food or processed meals, and eat more healthy fats I enjoy?  I don’t believe we can eat too much fruit and vegetables.  So, I load up on the ones I love, and prepare them in a way I enjoy, which sometimes means adding just olive oil and salt, and other times butter and cheese.  Every step we take towards better health adds up. 
 
Being happy in the process drives you to continue.   Learning what is healthy, and what isn’t, is the beginning of that process.  Find healthy things you love!  And eat more of them. 
 
Posted in Fun, Learning

Testing Galera in Kubernetes

I deployed a 3 node Galera cluster in Kubernetes.  Galera clusters MariaDB or MySQL, allowing you to read and write to all nodes while remaining consistent (ACID) at all times.  Kubernetes is a deployment environment for container applications. 

Here are key features of Galera:

– It uses MariaDB instances, and then uses a plug-in to cluster them. So, you are still using stock MariaDB instances.

– It uses InnoDB table types, which have been my default since they were introduced, and are now the out-of-the-box (OTB) default.  InnoDB introduced ACID to MySQL long ago.

– Every node is a master/slave. So you can write to any node.

– Unlike typical horizontal clustering, which typically offers eventual consistency, this provides consistency across nodes at all times.

What this means is that, from a functional perspective, you can continue to use it for your OLTP applications requiring ACID.

Its primary benefit is when a node fails: as long as quorum is met (a majority of nodes still up), the database remains available for transactions.

Enter Kubernetes (K8S), and a node failure is remedied as soon as K8S can recreate the pod.  If I kill a node, it brings it back up within a minute or two.  In the meantime, the other 2 of 3 nodes remain up and continue to serve transactions, since 2/3 is a majority.  This is the primary benefit of Galera, and Kubernetes is an ideal environment for it.
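If you want to confirm quorum from the database side while a node is down, Galera exposes cluster status variables you can check from any surviving node.  A quick way, using the same port-forwarded mysql client shown later in this post:

mysql -h 127.0.0.1 -P 13306 -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_%'"

With one of three nodes down, wsrep_cluster_size should report 2 and wsrep_cluster_status should still read Primary, meaning the remaining nodes have quorum and keep serving transactions.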

While Galera doesn’t provide load balancing, K8S does, as you connect in K8S to the single service name that routes the connection to a node that is currently available.

I tested this by adding a row to a database on one of the up nodes while a node I had just killed was being recreated automatically by K8S, but was still down.  When the killed node was restored, it too had the new row in the table.  So, new nodes “catch up” on missed transactions automatically.

I have not reviewed the performance impact; but, guaranteeing consistency across nodes 100% of the time has a performance cost when compared to a horizontal database with eventual consistency. Yet, performance is likely to be better than a single node since replication can be extremely efficient (think low level processing, without having to duplicate query processing). Your primary benefit, though, is higher availability.

Testing in Kubernetes

If you’d like to give it a whirl, here are instructions for how to test it. 

Create a cluster and deploy a 3 node Galera cluster.  I had no problem deploying Galera in Google Cloud to a cluster using these 3 YAMLs.

View in Kubernetes console

kubectl proxy

Access via

http://localhost:8001/ui

To use the Skip option at the dashboard login and have admin privileges, load dashboard-admin.yaml, which you can create per these instructions.

In order to test from a local db client, create a port-forward rule.  Here I use a different port because my local machine has its own instance of a MariaDB server listening on 3306.

# Listen on port 13306 locally for port 3306 of pod 'mysql-0'
 kubectl port-forward mysql-0 13306:3306

You can easily kill the port-forward and change the pod name to jump from one instance to another.  When I killed mysql-2, I inserted in mysql-0 while mysql-2 was still down.  Then, when mysql-2 was back up, I changed the port forward to mysql-2 to verify it had the new row inserted while it was down.  Alternately, you can port forward to all 3 pods on 3 different ports, as shown below.
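For example, to forward all 3 pods at once on different local ports (mysql-0 and mysql-2 are the pod names used above; mysql-1 is assumed to follow the same naming):

# Each pod gets its own local port
kubectl port-forward mysql-0 13306:3306 &
kubectl port-forward mysql-1 13307:3306 &
kubectl port-forward mysql-2 13308:3306 &

You can then point your local client at 13306, 13307, or 13308 to compare nodes directly.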

To connect, use this from a local client instance where you have MariaDB or MySQL installed:

mysql -h 127.0.0.1 -P 13306 -u root -p

To test the Galera cluster you can follow these instructions.

Cleanup

In addition to deleting the test cluster, you’ll need to delete the Persistent Volumes, which you can find under Google’s Compute Engine Disks if you are using GCP.  

Posted in Data, Technology

Added Charting to Automated Trading System

This is a continuation of Developing an Automated Trading System


Many of us use very robust charting software, including the popular thinkorswim platform, that does more than I plan to create in my system.  The requirement I ran into that could not be met by this software is the ability to chart unique data produced by my system that isn’t available to the third-party platforms, such as back testing results. 

Thus, I needed basic charting that allowed me to analyze things in the context of price history.  While a fully automated system won’t depend on charts, of course, I, the human, play a role both in its development and improvement, as well as a cohesive role in the automation itself.  To balance the human brain vs AI discussion, the goal is a “cyborg” in the beginning that becomes more and more machine as time passes.  Parts that are proven to be successful in production will remain in the cyborg while new parts are rigorously tested. 

I had a few requirements when comparing charting libraries:

  1. Extensible free open-source.
  2. Works with Angular2, our choice for UI.
  3. Can do price history charting well (stock charts).
  4. Can easily add lines (studies and other calculations).
  5. Can update in real-time.

Other bells and whistles were considered, but those were the core requirements.  I chose ng2-nvd3 as it met these requirements and had nice bells and whistles such as zooming, resizing, and user interactivity.  This is a 3-tier stack:

D3.js – a JavaScript library for manipulating documents based on data.
NVD3 – re-usable charts for d3.js.
ng2-nvd3 – Angular2 component for nvd3.

The center of the stack is NVD3, as ng2-nvd3 just provides an Angular2 interface to it. Interfacing via ng2-nvd3 worked well.  You have complete access to NVD3 capability.  It also updates the chart when you update the data, as you expect from an Angular2 component.  So, this completely met the Angular2 requirement.  

NVD3 is a bit limited, though.  They have a gallery of charts you can view.  It can produce a nice candlestick or OHLC chart with high, low, open and close bars.  But, you cannot add lines to these, and the multiChart option does not currently support candlestick or OHLC chart types.  The multiChart type includes area, line and bar charting only.  I can live with this limitation for now.  I just have to chart close prices of the original price history as a line, and additional lines for things such as MAs.

Extensibility.  In the long run I’ll one day want a candlestick chart with lines for MAs and other indicators.  I’ll also want lines for fibs, and other types of indicators, such as buy and sell signals, which might be up and down arrows, and other types of notation related to back testing.  There are two silver linings to the ng2-nvd3 stack. 

nvd3 is open source, so it can be easily improved if one is willing to learn the code.  You can copy and edit the Javascript files your installation is using, then optionally turn your changes into a pull request if you want them to become part of the project.  I talked to the primary committer on the nvd3 project, and he’s eager to accept pull requests.  While having updates committed to the primary project isn’t necessary, it is ideal so you can continue to easily upgrade in the future as well as share your love.  

On top of this, you can use d3 on your current charts.  I’ve already used it for some non-graphical utilities.  Your code has access to everything ng2-nvd3 and nvd3 has access to, including, of course, the DOM model generated by it.  So, you can easily learn and use D3 yourself to enhance your charts, perhaps to add the buy/sell signals, without even changing the nvd3 code.

Developing with D3 and extending nvd3 involves a learning curve.  While I’m heavily immersed with Typescript in Angular2 — and loving it — this does force you back into old Javascript, as d3 and nvd3 are both written in Javascript, not Typescript.  These are by no means show stoppers.  However, it does impact prioritization of time.  For this reason, I’ve limited myself for now to what I can do out-of-the-box as it permits me to get back to the original reason I decided to add charting next — the ability to view back testing results and signals I create.  

User Interface

The UI consists of 3 Angular2 components.  One child for the price history query parameters.  Another child for adding studies.  And the parent that brings those inputs together and outputs the chart. 

This uses both the Angular2 @Input and @Output decorators that allow you to tie components together.   Because the chart automatically updates when the data changes due to data binding, including chart configuration, you can continue to add to and modify a chart after creating it using the controls. 
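As a rough sketch of that wiring (the fields and payload shown are assumptions for illustration; only the StudyEntryComponent name comes from this post), a child component emits its collected inputs through an @Output event, and the parent binds that event to a handler that updates the data object the chart is bound to:

import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'study-entry',
  template: `<button (click)="add()">Add</button>`
})
export class StudyEntryComponent {
  @Input() symbol: string;                        // provided by the parent
  @Output() studyAdded = new EventEmitter<any>(); // consumed by the parent

  period = 50;

  add() {
    // Emit only on the button click, since the user may change several fields first.
    this.studyAdded.emit({ symbol: this.symbol, type: 'SMA', period: this.period });
  }
}

In the parent’s template, <study-entry [symbol]="symbol" (studyAdded)="onStudyAdded($event)"></study-entry> ties the two together; onStudyAdded() merges the study into the chart data, and data binding takes care of the rest.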

Angular2 Charting components

Because each child component potentially requires the user to update multiple fields before the chart can be updated correctly, each one has at least one button (Chart and Add).  When a button is pressed, the parent component receives the output and updates the chart.  Note that the StudyEntryComponent is an early-stage work in progress.  Yet, it can currently be used to add MAs to a chart. 

Charting Input Components

As you make modifications, clicking the Chart or Add buttons updates the chart.  You can also edit current MAs by selecting one, changing it, and then clicking Chart.  The next image shows the table that is created as you add or edit MAs, along with the resulting chart. 

Charting Output – Comparison with MAs

This chart demonstrates several features using nothing but out-of-the-box nvd3. 

If you resize the browser window, the chart automatically resizes.   While you can’t view the effect in the static image above, trust me, it works.  Have doubts? Check out the demos I linked to earlier.   

You can compare items using two different Y axes.  In this case, the Russell 2000 ($RUT.X) is on the right axis.  This currently creates studies for the underlying assets on the chart.  So, when we add an MA, it appears for both the S&P 500 ($SPX.X) and the Russell.  Being a two-dimensional chart, you cannot have more than two Y axes.  If you include a third item or more, they will share the right axis, which will be extended to handle the full range of possible values.  The choice of which axis an item belongs to is something you can control as you set up the data.  But, you cannot have a third Y axis.  So, you have to factor this into the design and how raw data is handled, with the impact on the Y range being your primary concern.  Combining an item that ranges from 0 to 2 with an item that ranges from 2000 to 2200 on one Y axis will result in two flat-looking lines far apart.   

The user can interactively hide/show any of the lines by clicking the legend.  You can see above that $RUT.X 200 EMA-we and $RUT.X 50 SMA-mo are both hidden because their circles in the legend are not filled in.   

Another feature that differs from some charting software is that the interval of the MAs is not limited to the interval of the chart.  While the chart is displaying weekly bars here, we added monthly MAs to the chart.  This is important because the algos will typically use one-minute bars for historical data, plus real-time quote updates arriving one or more times per second, yet need to be able to calculate MAs with intervals from 5 minutes to monthly. 

Round Trip Data Flow

Currently, when it needs to update the chart, it simply does a REST call for price history, which has the ability to add studies via parameters.  When those results come back, our UI side transforms the data using Typescript into the representation required to chart it, and simply replaces the data field in the ChartNVD3PriceComponent given to nvd3 to create the chart.  Due to data binding, the chart updates the instant this data is updated.
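A hedged sketch of the kind of transform described above (field names on the REST payload are assumptions; the series shape follows nvd3’s multiChart data format of key/type/yAxis/values):

// Map one symbol's price history (or a server-computed study) into an
// nvd3 multiChart series: an array of {x, y} points drawn as a line.
function toSeries(name: string, bars: { time: number; close: number }[], yAxis: number) {
  return {
    key: name,
    type: 'line',
    yAxis: yAxis,
    values: bars.map(b => ({ x: b.time, y: b.close }))
  };
}

// Replacing the bound data field is all that is needed; Angular2 data binding
// pushes the change into the chart component, for example:
// this.data = [toSeries('$SPX.X', spxBars, 1), toSeries('$RUT.X', rutBars, 2)];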

The REST call itself uses the parameters to construct and invoke a third-party API call.  Our facade to the API converts the raw data returned to POJOs.  Because our interface to the API uses caching, this could be in memory and returned instantly.  With price history in POJOs, our service then adds studies to the data as new fields.  Then, it converts the POJOs to JSON and returns it as the output of the REST.

Our Angular2 component receives this data, transforms it into the charting representation, and updates the chart data.  

Looking Forward

Adding charting to the application gets us started so we can begin to create JSON of back testing results that can be used to produce charts.  To add back testing results to charts, in Angular2, we’ll be creating a new UI component for defining back testing requirements, much like the one we created to add studies. 

The exception to simply using a one-trip REST query might be if the back testing takes longer than it does today due to new complexity and permutations.  In that case, I’m likely to redesign it to simply add the request to a back testing queue, and allow the user to monitor the queue and view results when available.  One advantage of this is that results can be viewed at any time later, so long as they are on the list of queries that were previously queued.

WebSockets can be used to update the queue in the browser without the user having to click.  You will be able to see, in real-time, the progress of your request. 

WebSockets can also be used to update the chart in real-time.  This will be important when using real-time quotes and monitoring trading.  With the exception of the data coming through WebSockets instead of REST, we won’t need to really change how charting works in Angular2, as it currently updates the chart whenever the data changes.  The only difference will be how the data changes.  Since we already use Angular2 for real-time updates of Level I and II quotes, monitoring of predictions, and order flow, using WebSockets to update a chart does not introduce a new technical feat. 

 

Posted in Finance, Investing, Technology, Trading, Web

Created Backtesting of Signals and Algos

This is a continuation of Developing an Automated Trading System


I began with algorithms for simple strategies.  The back testing tests a range of inputs for a strategy.  For example, you can test a range of trailing stops from 1 to 15% with 0.5% steps.  This will test 30 scenarios with the same data.  

You can combine strategies testing multiple ranges.  If your ranges include 10 target scenarios, and 10 stop scenarios, it will test 100 scenarios, as it will test every combination of your ranges.  There is no limit to the number of ranges you can combine. The REST call to create the backtest parses your strategies, creates entry/exit factories and iterates through the ranges.  
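Conceptually, the scenario expansion is just a cartesian product over the parameter ranges.  A minimal sketch (in Typescript for consistency with the UI examples; the actual back testing code is server-side, and the names here are illustrative):

// Expand a step range like "1 to 10 by 1" into its individual values.
function expand(start: number, end: number, step: number): number[] {
  const values: number[] = [];
  for (let v = start; v <= end + 1e-9; v += step) values.push(+v.toFixed(4));
  return values;
}

// Every combination of values across all ranges becomes one backtest scenario.
function combine(ranges: number[][]): number[][] {
  return ranges.reduce<number[][]>(
    (scenarios, range) => scenarios.flatMap(combo => range.map(v => [...combo, v])),
    [[]]
  );
}

// e.g. 10 targets x 10 trailing stops = 100 scenarios:
// combine([expand(1, 10, 1), expand(0.5, 5, 0.5)]).length === 100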

On the entry side, I’m creating indicators that can be used to fire signals.  While the signals are simple today (all true, all false), the logic can become complex as algos become aggregations of signals weighted to make a decision.  This will feed into machine learning and other techniques for prediction and optimization. 
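As a simplified sketch of where that is headed (the names and threshold are assumptions for illustration), each indicator contributes a weighted vote and the aggregate score drives the decision:

interface Signal {
  weight: number;
  fire: () => boolean;  // true when the indicator currently favors entry
}

// Weighted aggregation: enter when the weighted vote clears a threshold.
function shouldEnter(signals: Signal[], threshold = 0.6): boolean {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  const score = signals.reduce((sum, s) => sum + (s.fire() ? s.weight : 0), 0);
  return total > 0 && score / total >= threshold;
}

The “all true” and “all false” signals mentioned above are just the degenerate cases: fire: () => true and fire: () => false.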

Technical description: No new technology here.  This introduces a pattern of phased data enhancement.  

I was recently inspired by the AI series Westworld.  This led me to increase generification, conceptual streaming, and phased data enhancement, as I imagined the result being a high-performance real-time analytics engine that could potentially handle complex decisions beyond the current application.  The goal here is to ultimately build an AI engine with practical purpose driving it rather than theory, as well as a real-time analytics engine that can be deployed to solve a number of problems in various industries. 

For this reason, the back testing algos are designed to support real-time price updates that include time so they can handle their own temporal requirements, much like the human brain continuously analyzing real-time signals to help you make decisions.


Continued posts on Developing an Automated Trading System

Added Charting to Automated Trading System (Jan 18, 2017)

Posted in Business, Finance, Investing, Technology, Trading, Web

Developing an Automated Trading System

Ever since I was a kid and saw the Matthew Broderick movie War Games in a theater in 1983, it has been my dream to build an automated trading system using AI.  This year I have begun to live that dream.  I call this project jVest.

Curious how I built it?  Here is a description of what I created:

jVest is a real-time, high transaction volume market analysis and trading system implementing machine learning for predictive analytics.  It has an HTML 5 UI using WebSockets to continuously stream real-time information to the browser.  The UI itself is largely built using Angular 2 with Typescript.  It uses JMS to distribute the incoming third-party data messages to subscribers after converting them to POJOs.  Using one listener, it persists select data in an RDBMS using JPA.  Using another listener, data is streamed to business logic, which then streams its transformed output to the web UI via WebSockets.  It also uses the Observer pattern to flow the data through many consumers and producers, and injection (CDI) to simplify the complex interactions and provide a highly extensible design.

It uses REST for user interactions with the server and WebSockets for real-time streaming.  To handle conversion of POJOs to/from XML, it leverages JAXB to marshal, and leverages the marshalling capabilities of Resteasy in the REST layer.  All transportable POJOs implement a Marshallable interface providing common default methods to do JAXB marshalling to/from XML and JSON, which is used for WebSockets as well as conversion of XML to POJOs from upstream data providers.

For machine learning, it uses JServe to integrate with R over TCP/IP, which permits high scalability through distributed R nodes.  R then handles the predictive algorithms such as linear regression and KNN. Users can query via the web UI to analyze market data, as well as create monitors that regularly update the predictions in the web UI via WebSockets.

As you can see, it is not only a fun project, but has brushed up skills by using the latest updates to the Java EE stack.  On top of that, I learned completely new things, such as Angular 2 with Typescript, which is becoming a very popular UI solution that encourages creating highly re-usable UI components; and R, one of the leading languages for doing machine learning, with a plethora of readily available statistical analysis packages.

Using R to do machine learning is not only cutting edge in terms of technology, it is a rapidly growing area due to the proliferation of large quantities of data in computers, faster computers, and cheaper storage.

Limits to computer speed and storage were the reasons I set aside pursuing AI in the 80s after my initial dabbling with it.  It is exciting to be able to pick up that dream today, now that hardware can do things barely imaginable in the 80s.

Oct 28, 2016 – Added clustering via WebSockets

I added the ability to cluster servers using WebSockets.  This overcomes a limitation. The upstream data provider permits only one live real-time stream per account, though all the nodes can use the upstream provider’s synchronous API (think “REST”).  To overcome the stream limitation, other nodes can now connect via WebSockets to receive the real-time streaming data.  Because each instance of the application can act as both a provider and consumer, this allows for theoretically unlimited scaling of the business process and web/UI tier through a hierarchical topology.  Because all streaming services are initiated through REST or scheduling, this is 100% dynamically configurable at run-time, both from a user-driven perspective and, down the road, perhaps for automated discovery, load balancing and high availability.

Technical description: this uses WebSockets for Java-to-Java communication in an EE web container.  The provider endpoint picks up messages to send using @Observes, and the client side fires the messages it receives just as it would if it were running the data collector that handles real-time streaming with the upstream provider and had received them in its MDB.  This demonstrates the powerful plug-ability and extensibility of the Observer pattern.  It uses a WebSockets encoder and decoder to convert all messages to/from JSON for serialization.  The en/decoders were easy to create since all message POJOs entering and exiting the endpoints internally support JAXB bi-directional marshalling of both XML and JSON accessible through a common interface. 


Continued posts on Developing an Automated Trading System

Created Backtesting of Signals and Algos (Nov 30, 2016)
Added Charting to Automated Trading System (Jan 18, 2017)

Posted in Business, Finance, Investing, Technology, Trading, Web

Internet Relay Chat (IRC) and the Berlin Wall Falling

This is the story of using IRC when it first came out, long before most people had heard of the Internet.

Connecting Through College

In the late 80s, when I was in college, I created an account on the school’s computers.  I dialed in with my modem from home.  When you dialed in, you were presented with a UNIX prompt. This was an all text world.  No images.  No nice web pages.  Just a command prompt and programs that output text.

In the 80s, all the colleges in the US were connected to the Internet.  There was no commercial dial-up service like AOL, yet.  So, it was virtually all academics and scientists.  There were no corporations.  No one charged for anything.  No one competed.  It was just people connecting to people and information.

One of the first commands I learned was ‘irc’.  Once you use this command to connect to an Internet Relay Chat (IRC) server, you type /help, and from there learn other commands.  I quickly discovered thousands of channels with thousands of people from all over the world.

The purpose was to simply let people chat.  It is a myth that the Internet became social in the late 90s.  The Internet was social from the beginning, particularly with IRC.

What was surprising was that the Internet included people and servers all over the world speaking many languages, although English did seem to be the predominant language.

Want to talk to people in Spain?  Join the #spain channel.  Germany?  #germany.  When virtually no one had heard of Linux yet, there was always the #linux channel.  Want to create your own channel?  Just “/join #mychannel” and boom, you just created a new channel.  It was a level playing field in that anyone could create a channel and invite people to participate in it.  And, you could join any open channel.  Though, there were ways to make channels hidden, and to require passwords to enter.  There was never a fee, and most channels were openly there for anyone to join.  The only requirement for using IRC was an Internet connection and a client program like ‘irc’ to connect.


Free Communication Without Geographic Boundaries

This was an era when international long distance was prohibitively expensive.  Prior to IRC, you’d never dream of talking to people all over the globe.  So, imagine how exciting it was when, in 1990, one year after the Berlin wall fell, I was talking in IRC to someone who grew up in East Berlin!  I asked questions like, what was it like when the wall came down?  What was life like growing up behind the iron curtain?  How are you doing now that 1 year has passed and you’re now integrated with West Germany?

I’m not sure I could have called someone in East Germany yet, since it was behind the iron curtain just a year earlier, when it was unimaginable that you could call people there from the US.  I’m pretty sure you couldn’t do it prior to the wall coming down.  And, even if you could, it would have been very expensive, if the person you wanted to call happened to have a phone at all.  My brother went to Moscow University in 1993 under the Perestroika program.  It cost us $30/minute to call him.  I tried to get him to IRC, of course.  But, that never panned out.

IRC Today

To be sure, it hasn’t changed much today.  It’s bigger, of course.  There are more servers.  There are lots and lots of bots (automated programs) on the IRC.  There are still a lot of people across the globe using it.

However, in an age when most people know the Internet via the face of Google and Facebook, the IRC can seem a bit antiquated.  Yet, for open live text chatting, there’s still really nothing that has truly replaced it.  Yes, you can IM and do other forms of text chat.  But, having a room open 24/7 that anyone can go to and just text chat?  As far as I know, someone has to open a Google hangout and invite people.  There’s no list of thousands of Google hangouts you can join, particularly without knowing anyone in the channels.  What if you want to join a real-time live discussion of a topic you’re interested in?

Client programs for connecting to IRC improved a lot, especially in the 90s, giving people a graphical, easy-to-use interface.  The good news is these programs are free and have only improved over time.  Whether you are on Windows, Linux, Mac, Android or iPhone, there are great, easy-to-use client programs for connecting to IRC.  Don’t want to download and install a program?  You can now just use your web browser to connect to IRC.


Posted in Technology

July 2016 Jobs Report Reveals US Economic Weakness

The dollar tanked and the S&P 500 made a new all-time high when the headline news of the jobs report came out.  The probability of an interest rate hike in December increased from 32.1% to 45.4% (calculated using 30-day Fed Fund futures prices).
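For context on that probability figure, here is the standard back-of-the-envelope version of the calculation (a simplified form of the futures-implied approach; the prices and rates below are assumptions for illustration, not figures from the report):

// 30-day Fed Fund futures settle at 100 minus the average effective rate,
// so the market-implied rate is simply 100 - price.
function impliedRate(futuresPrice: number): number {
  return 100 - futuresPrice;
}

// Simplified hike probability: how far the implied rate sits above the current
// effective rate, as a fraction of one 25 bp hike.
function hikeProbability(futuresPrice: number, currentRate: number): number {
  return (impliedRate(futuresPrice) - currentRate) / 0.25;
}

// e.g. with an effective rate near 0.40% and a December futures price of 99.487:
// hikeProbability(99.487, 0.40) is roughly 0.45, i.e. about a 45% chance of a 25 bp hike.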

Our trusted media reported headline news such as

U.S. Posts Another Strong Month of Job Gains

However, the news media, which many investors and traders trust to make their decisions, failed to look into the data in the report to understand what it really says.

The Devil in The Jobs Data

ZeroHedge pointed out that Obamacare offset weak industrial and consumer sectors.  In another article, they point out that private payrolls grew an unadjusted +85k in July, far less than the seasonally adjusted headline number of +217k.

Reviewing the labor report myself, I discovered that the only education category for those 25 and older with an increase in actual jobs from June to July was high school grads with no college.  The other 3 categories, including those with some college, with or without a degree, had a decrease in the actual number employed.  (Table A-4)

The number of unemployed from permanent job loss (layoffs) increased from June to July from 1.848 to 2.014 million (+166k). Even the “seasonally adjusted” number, a fictitious number which is of course rosier, showed a 104k increase in permanent job loss.

Of course, with increasing layoffs come longer unemployment times, steadily increasing since May.  Average number of weeks unemployed went up in July to 28.1 from 27.1 in June.

May 26.7
June  27.1
July  28.1

Weakening US Economy

All this data points to a weakening US economy.  Educated workers are losing their jobs, being increasingly laid off.  Those on unemployment are having a harder time finding a job.  The increases that the headline refers to are high school grads taking jobs that do not add much to our economic strength, as they do not replace the high paying jobs being lost.  Many, of course, are temporary jobs due to the election season, which helps to explain the increase in high school grad jobs.

Clearly, as long as we trust a news media to do our analysis for us, and do not hold them accountable to critically review the jobs report, we’ll continue to be deluded by rosy headlines despite the truth being much less bright for the US Economy.


Posted in Finance, Investing, Trading

Rosy labor report?

The jobs report this morning caused the S&P 500 to hit new highs of the year, a few points short of its all time high set in May 2015.  It is tempting to pretend like all is well, and just buy stocks, and hope for the best.  Yet, perhaps the best way to protect your nest egg is to take a closer look with a critical eye.

I heard a few unconfirmed things today from traders regarding that report:

  • June was revised down from 38k jobs to 11k jobs
  • A large portion of the new jobs were people 55+

Note that gold and bonds soared today (my two favorite investments of the year).  #1 on the selling-into-strength list for most of today was SPY (S&P 500 ETF), with IWD (Russell 2000) at #4.  This is post-Brexit profit taking, which is common when traders believe the market is reaching another top.

ZeroHedge has an interesting critique of the jobs report that sent the markets soaring today:

The Bearish David Rosenberg Reemerges: “What If I Told You Employment Actually Declined 119,000 In June”

Selling into strength:


Posted in Finance, Investing, Trading