Blog
Bug DB
Date: 5/1/2010
At least no one can accuse me of not staying on top of the Scribe v2.00 bugs. There is only one major one left to deal with, although I know of several that I haven't bothered to put in the Bug DB, the sort of stuff that really needs valgrind to sort out. Hence I've been poking around in the Mac build.

I'm done with anything nVidia
Date: 3/1/2010
So far I've had a pretty messy run with nVidia stuff. First there was that awful, awful experience I had with an nForce3 mobo: 6 months of my life poured into getting that piece of crap running stable. I gave up in the end and ditched it. Then I bought an nForce4 mobo and an XFX 7600GS gfx card, which has been my core PC system for a couple of years now, and it's been very stable as far as PCs go. It gives my Macbook a run for its money in some respects.

The XFX card hardware is brilliant, never had an issue. But I'm stuck on a legacy nVidia driver for it because they deleted a piece of functionality that I really do NEED every day. That is video mirroring to the TV out, i.e. play a video on the main screen in a window and get the whole video fullscreen on the TV. They removed it completely from the driver UI some years ago, so I just stopped updating the driver after finding "the one true driver" (for me at least). I was pretty dark with nVidia over just that.

However I recently upgraded to SP3 and the ethernet port now only runs at 100 Mbit, which I cannot abide. Apparently it's some nVidia driver issue. So my mobo has lost its stereo out port (just flaky h/w), the video card is stuck in driver limbo, and now the ethernet is crippled.

So yeah, me and nVidia are done. I'm buying an Intel gigabit NIC to go with my Sound Blaster, to replace the functionality my mobo SHOULD'VE had all along. It's a stopgap till I spec up a hackintosh with some Core2 muscle.

Aspell
Date: 22/12/2009
A week or two ago I added inline spell checking to Scribe using the aspell plugin. For the first few days it was going so well that I was about to package up a release and upload it. But then it started crashing, so I thought maybe it's not thread safe or something and moved the aspell code into a single thread where it would talk to the rest of the system via a threadsafe API. But the crashes continued and I'm no closer to fixing it, so I might have to release sans spell check.
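
The shape of that threading change is roughly this: one worker thread owns the speller, and everyone else submits words through a locked queue and blocks on the answer. A minimal sketch, using std::thread for illustration rather than Scribe's actual thread classes, with CheckWord standing in for the real Aspell call:

#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <future>
#include <string>

struct SpellRequest
{
    std::string word;
    std::promise<bool> result; // true = spelled correctly
};

class SpellThread
{
    std::queue<SpellRequest*> q;
    std::mutex lock;
    std::condition_variable wake;
    bool quit = false;
    std::thread worker;

    // Stand-in for the real aspell_speller_check() call.
    bool CheckWord(const std::string &w) { return !w.empty(); }

    void Run()
    {
        // The real code would create the Aspell speller here, so it is
        // only ever touched from this one thread.
        while (true)
        {
            SpellRequest *r = 0;
            {
                std::unique_lock<std::mutex> g(lock);
                wake.wait(g, [this]{ return quit || !q.empty(); });
                if (q.empty()) break; // quit requested and nothing left
                r = q.front();
                q.pop();
            }
            r->result.set_value(CheckWord(r->word));
            delete r;
        }
    }

public:
    SpellThread() : worker(&SpellThread::Run, this) {}

    ~SpellThread()
    {
        { std::lock_guard<std::mutex> g(lock); quit = true; }
        wake.notify_one();
        worker.join();
    }

    // Thread-safe entry point, callable from the UI or mail threads.
    bool Check(const std::string &word)
    {
        SpellRequest *r = new SpellRequest;
        r->word = word;
        std::future<bool> f = r->result.get_future();
        { std::lock_guard<std::mutex> g(lock); q.push(r); }
        wake.notify_one();
        return f.get(); // blocks until the worker answers
    }
};

The point of the pattern is that the spell checking library is only ever touched from the one thread, so even if it isn't thread safe that shouldn't matter.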

I thought about valgrinding it to see if that would shed light on the problem, but a) the linux build sucks since I converted it to xcb/cairo, mainly because of the lack of documentation and time to work on it, and b) the mac build is crashy too because I don't understand the HIView hierarchy and memory ownership rules properly (and the MacOSX API doco is shit). So I had a crack at using wine and valgrind together to debug the windows build, and the wine/valgrind PDB reading code barfs on Scribe's PDBs.

*sigh*

And to top it all off, Aspell is compiled with gcc, so there is no way to get a debug version compatible with MSVC6 such that I could debug the problem directly. And no, you can't compile Aspell with MSVC because it uses various C/C++ language features that are specific to gcc (I tried).

So I guess I'm looking for a good spell checker API that I can use in Scribe. I wonder what Firefox uses?

Update: Solved! Some other issue was causing the Aspell thread to be constantly created and destroyed instead of being reused. Fixed that, and the near-immediate crashes are gone. I doubted it was an Aspell issue, but I couldn't be sure. By creating a working test harness for the Aspell code I could confirm that it worked fine in isolation, and therefore the problem was in the way Scribe was calling the Aspell thread/plugin stuff. Sure enough, it was.
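
The isolation harness doesn't need to be anything fancy. The rough shape of it, using Aspell's documented C API (not the actual harness code, just a guess at the minimum that exercises the speller in a loop):

#include <stdio.h>
#include <string.h>
#include <aspell.h>

int main()
{
    AspellConfig *cfg = new_aspell_config();
    aspell_config_replace(cfg, "lang", "en_US");

    AspellCanHaveError *err = new_aspell_speller(cfg);
    if (aspell_error_number(err) != 0)
    {
        printf("aspell error: %s\n", aspell_error_message(err));
        return 1;
    }
    AspellSpeller *sp = to_aspell_speller(err);

    // Hammer the speller in a tight loop, the way Scribe would over a
    // long session, and see if it ever falls over in isolation.
    const char *words[] = { "hello", "wrold", "spell", "chequer" };
    for (int n = 0; n < 100000; n++)
    {
        const char *w = words[n % 4];
        int ok = aspell_speller_check(sp, w, (int)strlen(w));
        if (n < 4)
            printf("%s -> %s\n", w, ok ? "ok" : "misspelled");
    }

    delete_aspell_speller(sp);
    delete_aspell_config(cfg);
    return 0;
}

If that loops forever without crashing, the library is off the hook and the bug is in the calling code, which is exactly how it played out.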

Scribe Mail3/Sqlite Folders
Date: 17/11/2009
I don't know about you, but the shine of a different back end to store mail in for Scribe v2 has well and truly worn off for me. By that I mean I expect performance to be considerably better than it is. I can live with the bad write/commit speed, considering that locks need to be taken and there are certain overheads to making atomic changes to the database. However, I was expecting positively snappy read/query times, and... yeah, it's not all that. Maybe it's the fact I'm using Sqlite rather than a big-iron style SQL server. I regularly see SELECT statements take on the order of 15-20 seconds to pull up a simple list of email in a folder, especially when the app is starting. I've added an index on the FolderId field, so in theory something like:
select * from Mail where FolderId=[inbox_id]
should be a no-brainer, super-fast query, right? Well it seems that it's not, and it sometimes takes an awfully long time.
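
For anyone hitting the same wall, one thing worth doing is asking Sqlite what plan it actually picked, via EXPLAIN QUERY PLAN. A quick sketch with the sqlite3 C API (table and column names as above, error handling trimmed):

#include <stdio.h>
#include <sqlite3.h>

// Print the plan Sqlite chose for the folder query. If the index is in
// play, the detail column mentions it, e.g.
// "SEARCH TABLE Mail USING INDEX ... (FolderId=?)"
void ShowPlan(sqlite3 *db)
{
    const char *sql =
        "EXPLAIN QUERY PLAN SELECT * FROM Mail WHERE FolderId = ?";
    sqlite3_stmt *s = 0;
    if (sqlite3_prepare_v2(db, sql, -1, &s, 0) == SQLITE_OK)
    {
        while (sqlite3_step(s) == SQLITE_ROW)
        {
            int last = sqlite3_column_count(s) - 1;
            printf("%s\n", (const char*) sqlite3_column_text(s, last));
        }
        sqlite3_finalize(s);
    }
}

int main(int argc, char **argv)
{
    sqlite3 *db = 0;
    if (argc > 1 && sqlite3_open(argv[1], &db) == SQLITE_OK)
    {
        ShowPlan(db);
        sqlite3_close(db);
    }
    return 0;
}

Running ANALYZE so the planner has statistics might also help. And if the Mail rows carry full message bodies, a SELECT * could be paying to read all that content off disk for every row; selecting just the columns the folder list needs is another thing worth trying.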

It's not the end of the world. Now that I have an architecture that is flexible I can experiment with a range of back end storage solutions till I hit on something that meets my fast/powerful/portable/small requirements.

One idea that I've got in the back of my head is generalizing the SQL back end to handle a number of different databases. For instance, in the "portable" mode you'll probably be storing reasonably small amounts of mail and the existing Sqlite is most likely enough to do the job, but for desktop installs something with more grunt (and more memory / install footprint) could be used instead, trading portability for speed. And with a decent replication tool you wouldn't be tied to that back end forever; you could migrate easily to the next big thing, or just move your DB to a different portable format.

Or... and here's a big idea... I make the back end API open and people can plug their own storage in. Of course that's only open to programmers, but if someone can produce a good back end implementation I would seriously consider making it available in the default install. The basic API is very controlled and there is already some documentation in the headers. And the existing back ends could be used as reference, as in you'd get source to them as well. If anyone wants a serious crack at that then email me.
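
To give a rough feel for the shape of it, a mail store back end boils down to an abstract interface plus a factory the app can load. This is purely illustrative; the real store3 API is in the headers and differs in the details:

#include <vector>

struct MailItem;   // opaque to the app, owned by the back end
struct FolderItem;

class MailStore
{
public:
    virtual ~MailStore() {}

    virtual bool Open(const char *path) = 0;
    virtual bool Commit() = 0;

    virtual std::vector<FolderItem*> GetFolders() = 0;
    virtual std::vector<MailItem*> GetMail(FolderItem *folder) = 0;

    virtual MailItem *CreateMail(FolderItem *folder) = 0;
    virtual bool DeleteMail(MailItem *mail) = 0;
    virtual bool MoveMail(MailItem *mail, FolderItem *dest) = 0;
};

// Each back end (Sqlite, flat file, whatever) exports a factory:
extern "C" MailStore *CreateStore();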

Scribe Import/Export
Date: 12/11/2009
I rewrote the mail encoder API to handle arbitrary streams of bytes for quoted-printable, plain-text and base64 data. This is now used by the mail export code, which directly interfaces with the store3 API that each of the mail store back ends uses. This means you get a fairly exact rfc822 message out of the export process. However these changes were initially buggy, so I'm in the process of verifying them with actual email. The way I'm doing this is, for a given folder, export all the email to rfc822 messages on disk, immediately re-import each one into a new object and then compare the mail objects in memory. This has already highlighted a few bugs in the code. Ideally I should be able to do this with every email in my mail store without any comparison errors. So far I've got it working for about the first 10 messages in my inbox :) To help with the debugging I wrote a small app that loads a mime message into a tree control and lets me look at the formatting of the mime segments.
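
As a self-contained taste of the round-trip idea, here it is applied to just the base64 leg: encode an arbitrary byte stream, decode it back and demand a byte-exact match. This is illustrative code, not Scribe's actual encoder:

#include <stdio.h>
#include <string.h>
#include <string>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

std::string Encode(const unsigned char *p, size_t len)
{
    std::string out;
    for (size_t i = 0; i < len; i += 3)
    {
        // Pack up to 3 bytes into 24 bits, emit 4 base64 chars.
        unsigned v = p[i] << 16;
        if (i + 1 < len) v |= p[i+1] << 8;
        if (i + 2 < len) v |= p[i+2];
        out += B64[(v >> 18) & 63];
        out += B64[(v >> 12) & 63];
        out += (i + 1 < len) ? B64[(v >> 6) & 63] : '=';
        out += (i + 2 < len) ? B64[v & 63] : '=';
    }
    return out;
}

std::string Decode(const std::string &in)
{
    std::string out;
    int bits = 0, acc = 0;
    for (char c : in)
    {
        const char *p = strchr(B64, c);
        if (!p || !c) continue; // skip '=', whitespace, line breaks
        acc = (acc << 6) | (int)(p - B64);
        bits += 6;
        if (bits >= 8)
        {
            bits -= 8;
            out += (char)((acc >> bits) & 0xFF);
        }
    }
    return out;
}

int main()
{
    // Round trip every possible byte value and demand an exact match.
    unsigned char data[256];
    for (int i = 0; i < 256; i++) data[i] = (unsigned char)i;
    std::string rt = Decode(Encode(data, sizeof(data)));
    bool ok = rt.size() == sizeof(data) &&
              !memcmp(rt.data(), data, sizeof(data));
    printf("round trip: %s\n", ok ? "exact match" : "MISMATCH");
    return ok ? 0 : 1;
}

The mail-level test is the same trick one level up: the export/import pair is the encoder/decoder, and the comparison walks the two in-memory objects instead of a byte buffer.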

Then I have to verify that this works with the 3 different back ends. Then I have to check that I can copy mail and folders between the 3 different back ends... then... then...

If you wonder why I go silent for weeks on end, it's because I'm working on the code.

The other thing I'm working on is the strange little bugs that happen in the IMAP back end, like when you run a filter that moves an IMAP email and it then sometimes crashes. Basically an IMAP "move" equates to a "copy" and a "delete". So when the delete event propagates back to the UI some time after the move, it nukes the object while other parts of the app still have a pointer to it. Unsurprisingly this is bad for stability. And I still haven't worked out the best way to deal with this.
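
One common way out, and I'm not claiming it's what Scribe will end up doing, is to reference count the mail object and have the delete event merely mark it dead, so stale pointers elsewhere stay valid until their holders let go:

#include <atomic>

class MailRef
{
    std::atomic<int> refs{1};
    std::atomic<bool> deleted{false};

    ~MailRef() {} // only Release() may destroy the object

public:
    void AddRef()  { refs++; }
    void Release() { if (--refs == 0) delete this; }

    // Called when the IMAP delete event arrives: no free, just a flag.
    void MarkDeleted() { deleted = true; }

    // Callers check this before using the object any further.
    bool IsDeleted() const { return deleted; }
};

The memory cost is that dead objects linger until the last reference drops, but the dangling-pointer crash becomes a benign "oh, this one's gone" check.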

Mpeg2 Parser
Date: 19/10/2009
Last night I got the latest refactor of my mpeg2 parser working on non-trivial amounts of data. I'm working on it with a view towards replacing PVAstrumento in my private stock video re-encoding app, which compresses a lot of DTV files down to H264/AAC for semi long term storage. The issue with most demuxers is that they are not good at recovering from errors or at keeping the video and audio in sync. PVAStrumento is the best demuxer for damaged and chopped up mpeg2 as it has a raft of error recovery logic. However it's painful to use and not perfect either; sometimes there are files that even it can't handle. So for most of this year I've been working on and off on a replacement parser.

As of yesterday I have a basic parser working that allows access to the raw streams. No error correction yet, but it's a great foundation. The trick is mostly in chopping up the audio packets to match the valid video frames, using the PTS (timestamp) values in the program stream packets. At this point I have access to all the PS packets, their PTS values and all the frames of video and audio, so it's just a matter of working out which to keep and which to chuck. And I can use PVAStrumento as a reference for what I should be doing (it works great 98% of the time), then move on to the "problem" files, see where PVAStrumento is falling over and make my parser work for those edge cases.
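
The keep/chuck decision itself is conceptually simple: work out the PTS range covered by the valid video frames and keep only the audio frames that overlap it. A sketch (real code also has to handle the 33-bit PTS wrap, which I'm ignoring here):

#include <stdint.h>
#include <vector>

struct Frame
{
    int64_t Pts;      // 90 kHz PTS from the PES header
    int64_t Duration; // frame duration in the same units
};

std::vector<Frame> TrimAudio(const std::vector<Frame> &audio,
                             int64_t videoStart, int64_t videoEnd)
{
    std::vector<Frame> keep;
    for (size_t i = 0; i < audio.size(); i++)
    {
        const Frame &a = audio[i];
        // Keep the frame if any part of it overlaps the video range.
        if (a.Pts + a.Duration > videoStart && a.Pts < videoEnd)
            keep.push_back(a);
    }
    return keep;
}

All the difficulty is in deciding what "valid video frames" means when the stream is damaged, which is exactly where the error recovery logic lives.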

Well, that's the theory anyway.

Strangely, buffer size mattered a lot. I started with a 2 MB memory buffer for the top level program stream and was getting 15 MB/s parse speed, which is considerably lower than the HD's maximum transfer rate, and I confirmed that the algorithm was CPU bound by checking the CPU usage: 1 core was maxed. So I experimented with different buffer sizes. 4 MB? Speed dropped to 13 MB/s. 1 MB? Up to 20 MB/s. Hmmm, that's odd. 512 KB? 40 MB/s, wow, that's nice... 256 KB? 54 MB/s... I ended up settling on 32 KB, which tops out at about 59 MB/s. I think that's largely because the more buffer I read, the more PS packets I have to keep track of, and I suspect somewhere I have a non-optimal algorithm working on the list of packets. Anyway, I think it's mostly HD bound now, which is how it should be.
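
For anyone trying the same tuning, it helps to separate raw disk throughput from the parser's CPU cost: time a plain fread() loop at the same buffer sizes and compare the curves. A trivial harness, illustrative only:

#include <stdio.h>
#include <chrono>

// Raw-read baseline: time fread() over the whole file at one buffer size.
double ReadSpeed(const char *path, size_t bufSize)
{
    static char buf[2 << 20]; // big enough for the largest test size
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    auto start = std::chrono::steady_clock::now();
    size_t total = 0, n;
    while ((n = fread(buf, 1, bufSize, f)) > 0)
        total += n;
    fclose(f);

    std::chrono::duration<double> sec =
        std::chrono::steady_clock::now() - start;
    return sec.count() > 0 ? (total / 1048576.0) / sec.count() : 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    for (size_t kb = 32; kb <= 2048; kb *= 2)
        printf("%4u KB buffer: %.1f MB/s\n",
               (unsigned)kb, ReadSpeed(argv[1], kb * 1024));
    return 0;
}

The OS file cache will happily lie to you on repeated runs over the same file, so use a file bigger than RAM or flush the cache between runs. If the baseline is flat across buffer sizes but the parser isn't, the difference is in the packet-list code, not the disk.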