Fixed is exactly what it sounds like: you pay a flat rate of $0.005/share with a $1 minimum (plus a few pennies in FINRA fees and the like). That means you can trade up to 200 shares for $1. IB keeps any rebates for itself.
Tiered is $0.0035/share (less at higher volume tiers) with a $0.35 minimum, but you pay ECN fees and keep any rebates. You also have the option to direct route.
So which is better? Over 200 shares, fixed is better unless you always add liquidity and can take advantage of ECN rebates. At 100 shares and under, tiered is better. We’re only talking about a few cents’ difference per trade, and if a few cents of commission is enough to worry you, your strategy probably isn’t viable anyway.
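To put numbers on that, here is a quick sketch of both commission schedules for a single trade. The ECN fee is a rough placeholder assumption (about $0.003/share for removing liquidity); actual fees vary by venue and by whether you add or remove liquidity:

```python
# Rough comparison of IB's Fixed vs. Tiered commissions for one US stock order.
# The ECN fee below is a placeholder assumption, not an exact IB/venue figure.

def fixed_commission(shares):
    # $0.005/share, $1 minimum; exchange fees are bundled in
    return max(1.00, 0.005 * shares)

def tiered_commission(shares, ecn_fee_per_share=0.003):
    # $0.0035/share, $0.35 minimum, plus pass-through ECN fees
    return max(0.35, 0.0035 * shares) + ecn_fee_per_share * shares

for n in (80, 100, 200, 500):
    print(n, fixed_commission(n), round(tiered_commission(n), 2))
```

With that assumed ECN fee, tiered wins at 100 shares and under while fixed wins at 200 and above, which matches the rule of thumb above; always adding liquidity (earning rebates instead of paying fees) flips the comparison back toward tiered.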
In my case, testing algorithms, I’ve always been on tiered with their paper trading account. Fees end up between $0.20 and $0.60, which is essentially free. My algorithm has a max loss of $10 and sizes positions according to the stop price, so nearly every trade it takes is 100 shares or less.
When I go live I will most likely stay on tiered, so even if I get stopped out with a tiny $1 win, commissions won’t turn it into a loss. That should make testing less stressful.
I am not a registered financial adviser/broker/anything. Use this information for entertainment/informational purposes only. Any tickers mentioned are not recommendations to buy, sell, or sell short. They are used as examples only.
I watched one of LinusTechTips’ recent videos where he demoed game streaming from a service I hadn’t heard of called Shadow (coupon: RYA1BMUE). I was intrigued by the fact that you get a full Windows 10 VM with 8 cores, 12 GB RAM, and a GTX 1080 or Quadro P5000 to install whatever you want on (besides mining), with very low latency.
It was only $15 after the coupon code (use mine if you decide to try it: RYA1BMUE), so I said why not. After the initial setup there was massive latency of a second or more, even though my ping to the data center in New York is 7 ms. Rebooting the VM solved it, and it’s been fine ever since. There is still some latency, though, so I would not recommend it for multiplayer FPS games like CS:GO or Overwatch. Anything singleplayer or co-op, like Fallout 4 or Left 4 Dead, is fine.
They must be overprovisioning their service, so this is essentially how I think it works (my best guess):
They expect most people to not be gaming while their service is active so they can overprovision and have more paying customers. This is actually a good thing because it brings the cost down for everyone substantially.
Your individual VM storage lives on another server in the DC.
When you open the Shadow software and launch your Shadow, it is spun up on a random server with your block storage attached.
When you close the Shadow software, your VM is turned off after several minutes so someone else has the chance to use the hardware.
That’s a brief overview of my thoughts on it. Now, about laptop gaming.
There’s this little utility called Volta that can be used to undervolt and limit the maximum wattage of your CPU on macOS. On Windows you can use Intel XTU. I don’t think AMD has a way to do this yet.
Basically, what I did was limit the maximum wattage of my 2013 MacBook Pro 13″ to 5 W and ran Shadow for game streaming. My MacBook got a little warm but never hot (the fan was off the whole time) as I played Left 4 Dead on a bed comforter.
Activity Monitor said I had about 6 hours of battery life while Shadow was running, which is pretty good considering it was hogging about 40% of the CPU. I have a Ubiquiti AP, so even playing over WiFi there wasn’t much difference compared to wired.
It’s fairly impressive, but since my own PC is better than the VM, it makes more sense to stream games locally for even lower latency. I haven’t tested Steam In-Home Streaming with the capped power limit, so that may work. Last time I tried it, the quality wasn’t very good and it killed the battery quickly.
I was setting up a Debian server with all SSDs and kept getting a strange error that prevented mdadm from flushing data, which locked up the entire system.
I decided to check the SMART log and saw a strange error. All of the SMART attributes are good, with no reallocated sectors. I’m using all Intel SSDs and have never had a problem with them until this one.
The error does not appear to be the drive itself: I tested another Intel 730 SSD, and even moved a known-good drive to the same port, and it still occurred. It’s actually a problem with the cable. Most likely it was just loose and not making solid contact with the SATA port on the motherboard. I moved the cables around and have yet to see it happen again.
One strange thing is that the other 730 drive I tested already had a bunch of these errors. I don’t recall having issues with it, so maybe they came from unplugging it while it was powered on?
As you can see I also ran an extended SMART test and it passed just fine.
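For anyone who wants to script this kind of check, here is a minimal sketch that pulls the raw values out of `smartctl -A`-style output (smartmontools). The sample text is an illustrative excerpt, not output from the drives above; on a real system you would feed in the actual output of `smartctl -A /dev/sdX`:

```python
# Parse the raw values from `smartctl -A` output. SAMPLE is illustrative
# data, not captured from the drives discussed in this post.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       8211
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       12
"""

def parse_attributes(text):
    """Map attribute name -> raw value for each SMART attribute row."""
    attrs = {}
    for line in text.splitlines()[1:]:
        parts = line.split()
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = int(parts[9])
    return attrs

attrs = parse_attributes(SAMPLE)
# A nonzero UDMA_CRC_Error_Count alongside zero reallocated sectors points
# at the cable/link rather than the flash -- the same pattern as above.
```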
In the beta release of the TWS API there are now custom scan filters that you can apply. This functionality mirrors what is available in TWS so now it can all be done through the API. This should make filtering out junk way easier.
To get this you need to apply for access to the private GitHub repository by signing a form. Eventually it should make its way into the “latest” API version.
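Conceptually, the filters are tag/value pairs attached to the scanner subscription. Here is a minimal sketch of that shape; the `TagValue` stand-in and the tag names are assumptions for illustration, since the real identifiers live in the private beta repository:

```python
from collections import namedtuple

# Stand-in for the API's TagValue class so this snippet runs without the
# private beta package installed.
TagValue = namedtuple("TagValue", ["tag", "value"])

# Hypothetical filter: only stocks above $5 with at least 500k shares traded.
# These tag names are illustrative, not confirmed identifiers from the beta.
filter_options = [
    TagValue("priceAbove", "5"),
    TagValue("volumeAbove", "500000"),
]

# With the real API, a list like this would be supplied as the scanner
# subscription's filter options when requesting the scan.
```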
I redid some of the tests with an mdadm RAID 10 array of 240 GB SSDs and the results were nearly identical. It’s possible the virtio drivers or Windows version I used was causing strange storage performance.
I re-ran none/threads on a Debian 9 host with a Server 2008 R2 guest, and the sequential reads/writes are great: 1455 MB/s reads and 548 MB/s writes. Everything else looks good too. Very strange.
I don’t intend to use ZFS for VMs at the moment due to lack of encryption support and some annoyances like Ubuntu not booting half the time because ZFS datasets won’t mount correctly.
4 cores / 4 threads
8 GB RAM
120 GB zvol / qcow2, lz4, relatime
No page file, optimize for programs, DEP off
Windows Server 2016
I’ll be using CrystalDiskMark (CDM) for benchmarking under Windows.
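As a quick reference for the cache modes tested below: each QEMU cache mode boils down to whether the host page cache is used and whether guest flush requests are honored (the latter is what makes a mode crash-safe). A summary sketch:

```python
# QEMU/KVM disk cache modes, summarized as
# (uses host page cache, honors guest flush requests).
# Honoring flushes is what keeps guest data safe across a host crash.

CACHE_MODES = {
    "none":         (False, True),   # O_DIRECT; bypasses the host page cache
    "writeback":    (True,  True),   # fast, but dirty data sits in host RAM
    "writethrough": (True,  True),   # reads cached; every write synced to disk
    "directsync":   (False, True),   # O_DIRECT plus a sync on every write
    "unsafe":       (True,  False),  # flushes ignored; fastest, riskiest
}

risky = [mode for mode, (_, honors_flush) in CACHE_MODES.items()
         if not honors_flush]
```

This makes the results below less surprising: the modes that buffer writes in host RAM (writeback, unsafe) are the ones whose memory usage balloons during the write tests.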
none / threads
Memory usage spiked up to 30.9 GB and crashed the host while doing the write test.
writeback / threads
Remote desktop died but the test continued. Memory usage kept climbing to 17 GB from a baseline of 12.8 GB.
writethrough / threads
I can immediately tell that this is faster than writeback. The VM started quicker and loaded the server config tool faster. However, the write speeds are absolutely awful. Memory usage was a consistent 12.8 GB during the entire test.
This seems close to what an individual drive is capable of except the sequential writes should be a lot higher. Strange.
directsync / threads
Appears to be really fast just like writethrough. I remember testing this option before with mdadm RAID 10 under a Linux VM and it was extremely slow. With ZFS it appears to be different. Consistent 12.8 GB memory usage as well.
Overall it looks like the performance is slightly worse than writethrough.
unsafe / threads
This is supposedly similar to writeback. However, the OS and programs loaded faster than with writeback. Memory usage peaked around 14 GB, which is lower than writeback.
This is the option I use on a production server. I had severe performance issues while doing lots of random I/O in a Linux VM with “threads”.
Memory usage peaked at 30 GB. It slowly came down as it was flushed to disk. Sequential write speeds are that of a single SATA 3 hard drive. Even though it is higher than ‘writethrough’ from above, the test took longer to complete.
Reads are half compared to the zvol with writeback. Writes are a little better than a standard hard drive. Memory usage dropped when the write test started. ZFS freeing the cache?
unsafe / threads
Reads are slower than writeback but sequential writes are slightly improved. Windows appears to be snappier using unsafe no matter if it’s a zvol or qcow2 image.
The tests above were conducted using the default cfq scheduler. I will only be testing zvols for this part.
This is usually the best scheduler for SSDs and not spinning disks.
none / native
Ran out of memory. native with no cache does not appear to be safe with limited memory.
writethrough / threads
Sequential reads are a lot higher because the requests are not being re-ordered. Writes do suffer.
unsafe / threads
writethrough / threads
Marginally better than noop. In theory this scheduler should be better suited to VMs than CFQ when lots of requests are in flight.
unsafe / threads
No real difference between noop and deadline for this.
I’m not sure if I got it working correctly. After I enabled the kernel option all of the other schedulers were removed and the VMs were very slow. I’ve used it on Arch before and never noticed anything weird.
unsafe is clearly the fastest and best option if you want speed without caring about data integrity. writeback is fast but it uses more memory.
qcow2 is slower than using a zvol but has some benefits, like thin provisioning and being able to specify an exact file name. With zvols, the device names sometimes change on reboot.
writethrough is surprisingly snappy in Windows despite the write tests showing poor performance.
native offers good performance according to CDM, but memory usage will peak and drag performance down. This option also makes Windows and programs load more slowly, so the simple benchmarks are not telling the whole story. It can even crash the host, which is why I would not recommend using it.
No matter what option you use, read performance is good with zfs + KVM. It’s always the writes that are the problem.
TWS automatically logs out at about 11:45 PM. They recently added an automatic restart feature to the beta release, but I have yet to get it working.
Since I am just paper trading to test algorithms, I found an AutoHotkey script that will start/reopen TWS and log in when the process disappears. I decided to write my own in PowerShell so it works on all Windows systems without needing to install anything.
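For illustration, here is the watchdog logic sketched in Python (my actual script is PowerShell, and the `tws.exe` process name is an assumption; the real process and launcher names may differ). The process list and launcher are injected as callables so the loop itself stays testable:

```python
import time

def is_running(name, process_names):
    """True if `name` matches a running process name (case-insensitive)."""
    return name.lower() in (p.lower() for p in process_names)

def watch(get_processes, start_tws, poll_seconds=30, max_polls=None):
    """Poll the process list and relaunch TWS whenever it disappears.

    get_processes -- callable returning current process names (on Windows,
                     e.g. parsed from `tasklist` output via subprocess).
    start_tws     -- callable that launches TWS and performs the scripted login.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        if not is_running("tws.exe", get_processes()):
            start_tws()
        polls += 1
        time.sleep(poll_seconds)
```

Run with `max_polls=None` it loops forever, restarting TWS shortly after the nightly logout kills the process.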