I had a stock WordPress website but I didn’t really need the power or cost of the hosted LAMP stack behind it, so I gave myself a lockdown project of rebuilding it. I also wanted to make the site faster and simpler.
I’ve also witnessed how the web is becoming less friendly and bulkier with the rise of tracking technologies. Because of this, an additional goal was to build the site with no JavaScript or cookie banners if possible. Just good old HTML and CSS.
I had a play with some static site generators and settled on . It looked easy to use and had template support so I could easily change the default themes (although I never bothered doing that with Wordpress lol).
It’s been an iterative process and the more I learnt the more I wanted to change things. I would keep trying new ideas, often choosing to abandon them altogether and keep it simple. 😂
I also decided to build on my experience with a self-hosted GitLab. I’d successfully set it up in a previous job to manage code changes from different suppliers but never got as far as trying the built-in pipelines and CI processes. Now I can push any site changes to a repository and have them deployed by a pipeline to different locations depending on the branch I push to. The pipeline automatically reduces and compresses any pictures, adds watermarks to my own pictures, and creates reduced-size images for a mobile version of the site. Nice 😎
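For anyone curious, the image-processing step of a pipeline like this can be sketched in a few lines of GitLab CI config. Everything below is illustrative — the stage names, paths, sizes and ImageMagick settings are assumptions, not my actual pipeline:

```yaml
# Hypothetical .gitlab-ci.yml fragment — names, paths and sizes are examples.
stages:
  - images
  - deploy

process_images:
  stage: images
  image: alpine
  before_script:
    - apk add imagemagick
  script:
    # Shrink and recompress any pictures wider/taller than 1200px
    - mogrify -resize '1200x1200>' -quality 82 static/images/*.jpg
    # Reduced-size copies for the mobile version of the site
    - mkdir -p static/mobile
    - for f in static/images/*.jpg; do convert "$f" -resize '600x600>' "static/mobile/$(basename "$f")"; done
  artifacts:
    paths:
      - static/

deploy_live:
  stage: deploy
  only:
    - main        # a second job, keyed on another branch, could deploy to staging
  script:
    - rsync -az public/ deploy@example.com:/var/www/site/
```

The useful trick is the `only:` (or, on newer GitLab, `rules:`) key, which is what lets different branches deploy to different locations.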
There will be changes as there’s a lot I want to improve and layout bugs to fix. I’m also exploring options for allowing commenting and checking out GoAccess for site analytics as it uses no JavaScript and captures the info I’m interested in. Take that Google 💪
Hopefully I’ll find more time to write as I can focus on content rather than updating things.
LOL
Enjoy!
This article and the code examples within were written in 2014 for SQL Server 2008 R2. Please test the code to make sure it works as expected if you are using a newer version of SQL Server.
I work with Microsoft SQL Server every day and have been developing scripts to monitor things such as database growth and the amount of free HD space on each of the systems.
The code I have that records the sizes stores the DBID of each database next to each recording so I can track growth over time. “Sounds fine so far,” you say. So did I…
When developing some code to report on the data collected I noticed that there were several databases where the sizes had changed after a period of having zero size.
Further investigation showed that MSSQL had reused IDs that belonged to databases I’d previously deleted. Doh! I needed a better way of differentiating between the databases I was monitoring.
I looked at what information MSSQL stores for each database on a system:
SELECT * FROM master..sysdatabases
This gave me a list of the databases on the system along with some information about them, such as the database name, the date it was created, its ID, the path of the MDF file, status, etc. I wanted to find something unique in the list that wouldn’t change.
The DBID doesn’t change, and neither does the name on any of my systems, but both could be reused if the DB was deleted. So how could I make sure that a database called ‘Geoff’ with a DBID of 7 is the same DB as the one I checked earlier and not an impostor? If I also check the ‘crdate’ column I get the date the database was created. Between all three pieces of data I can create a unique identifier for each database that would change if the database was deleted and recreated.
SELECT (name+CAST(dbid as varchar(3))+CAST(crdate as varchar(20)))
FROM master..sysdatabases
This is unique but not very useful to work with, as some of the generated strings are very long or awkward. When scripting on Linux I can use an MD5 hash as a simple way to see if a file has changed, and getting the MD5 hash of this string sounds much better than using the string itself. For this I used the HashBytes function.
SELECT CONVERT(NVARCHAR(32),HashBytes('MD5', name+CAST(dbid as varchar(3))+CAST(crdate as varchar(20))),2) as MD5Hash
FROM master..sysdatabases
WHERE name = 'Geoff'
A45756E1D7B9E615FCD7EEEDB0CC518D
This means I can now use the hash as a unique identifier and be confident it will be different even if the database name and ID are somehow, by coincidence, the same.
I created a couple of functions that enable you to get the hash of a database by either its ID or its name. I can then store this elsewhere and use the hash in future logging. Hopefully they’ll be of use to somebody:
USE [KPHOnline]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: kphonline.co.uk
-- Create date: 02/06/2014
-- Description:	Returns a unique ID for a DB since DBID isn't reliable enough. Generates an MD5 hash of the concatenated DB name, ID and created date
-- =============================================
ALTER FUNCTION [dbo].[fn_DBA_ReturnDBHashfromDBID]
(
@pDBID int
)
RETURNS VARCHAR(32)
AS
BEGIN
DECLARE @vHash VARCHAR(32)
SELECT @vHash = CONVERT(NVARCHAR(32),HashBytes('MD5', name+CAST(dbid as varchar(3))+CAST(crdate as varchar(20))),2)
FROM master..sysdatabases
WHERE DBID = @pDBID
RETURN @vHash
END
and
USE [KPHOnline]
GO
/****** Object: UserDefinedFunction [dbo].[fn_DBA_ReturnDBHashfromDBName] Script Date: 03/06/2014 22:45:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: kphonline.co.uk
-- Create date: 02/06/2014
-- Description:	Returns a unique ID for a DB since DBID isn't reliable enough. Generates an MD5 hash of the concatenated DB name, ID and created date
-- =============================================
ALTER FUNCTION [dbo].[fn_DBA_ReturnDBHashfromDBName]
(
@pDBName varchar(128)
)
RETURNS VARCHAR(32)
AS
BEGIN
DECLARE @vHash VARCHAR(32)
SELECT @vHash = CONVERT(NVARCHAR(32),HashBytes('MD5', name+CAST(dbid as varchar(3))+CAST(crdate as varchar(20))),2)
FROM master..sysdatabases
WHERE name = @pDBName
RETURN @vHash
END
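With both functions in place, tagging a recording with the hash is a one-liner. A quick usage sketch — the dbo.DBSizeLog table below is made up purely for illustration and isn’t part of the functions above:

```sql
-- Hypothetical usage; dbo.DBSizeLog is an illustrative table name.
SELECT dbo.fn_DBA_ReturnDBHashfromDBName('Geoff') AS DBHash;

-- Store the hash, rather than the reusable DBID, next to each growth recording
INSERT INTO dbo.DBSizeLog (DBHash, RecordedOn)
VALUES (dbo.fn_DBA_ReturnDBHashfromDBID(7), GETDATE());
```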
I’m sure there may be better ways of solving the problem but this approach worked for me when I needed it :)
Before I started the recovery from my dying hard drive I needed somewhere to put it. All I had available at the time was a 16GB USB Memory Stick which I formatted on the Xbox as an Xbox Storage Device.
Windows 7 doesn’t understand the FATX filesystem on the hard drive so I needed to use a separate piece of software to recover the data. The first one I tried is the awesomely titled Party Buffalo Drive Explorer (PBDE). Connect your Xbox drive and start the software. You might need to run it as an Administrative user to get access to the drive. Now open the connected Xbox drive by going to ‘File > Open > Device Selector’. If you have an image of your Xbox HD you can open that instead via the ‘File’ menu. Select your drive and press ‘OK’.
Once it’s spent a few moments reading your drive you should get a list of folders appear in the left pane:
The core files for each game or application are located in the ‘Data\Content\0000000000000000’ folder. Here you will find all the installation files and any downloaded content. I chose to ignore my game installation files as I could always recreate these by installing the games from disc onto a new HD. The folders starting with ‘E’ in this list contain the user-specific files. Here you’ll find your settings, avatar items and game saves. There are two users on my Xbox so there are two folders.
For any games where I’d purchased DLC I browsed to the game in question and looked for a folder with the description of ‘Marketplace Content’.
To backup the data to your computer right-click on a folder and select ‘Extract’. Choose a destination for the files and press ‘OK’. You will now witness one of the best progress bars ever as your files are copied. Weee!
Now that the data was safely off the Xbox drive I needed to copy it over to the USB drive. I closed PBDE and disconnected the failing Xbox HD. After plugging in the USB drive I started PBDE back up and copied the files to the same path they originally came from. Close PBDE and connect the drive to the Xbox. Use the storage browser on the Xbox to check that the files are back.
If you don’t want to risk damaging your failing XBox HD any more or want to fully back up a working drive you can select the ‘Backup to Image’ option. I found that this crashed every time it got to the damaged section of my HD so I recovered my files by extracting the data from a directly connected drive. I did eventually get an image at a later date by using gddrescue on my Linux laptop which was able to ignore the bad sectors.
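For reference, the gddrescue run was along these lines — the device name and filenames here are examples, so double-check yours with `lsblk` before touching anything:

```shell
# Example only: /dev/sdb stands in for the Xbox drive. The mapfile lets
# ddrescue resume later and remember which sectors were unreadable.
sudo ddrescue -n /dev/sdb xbox360.img xbox360.map      # quick first pass, skip the bad areas
sudo ddrescue -d -r3 /dev/sdb xbox360.img xbox360.map  # go back and retry the bad sectors
```

The two-pass approach matters on a failing drive: grab everything readable first, then hammer the damaged sections afterwards.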
Now all you need to do is wait for your new Xbox 360 320GB HD to arrive so you can install all of your games again!
Comments from wordpress will appear here once I’ve copied them over 👍
My Xbox 360 slim started behaving strangely the other day. I’d just bought some new games and was going through the process of installing them to the HD. Half way through the install the Xbox crashed and I was back at the profile select screen.
I deleted the half installed game files and tried again. It still crashed and I was back at the profile select screen. This time selecting my profile would cause the Xbox to reset. I tried a different profile and it reset. Even accessing the system menus without selecting a profile would cause it to reset 🙁 As soon as there was HD access the Xbox would die.
My Xbox is an arcade (4GB) version that I later upgraded with the 250GB HD. My profile is located on the 4GB so I removed the HD to see if my profile would now work. It did! This narrowed my reset problem down to the HD. It was either a faulty HD or a possible filesystem problem.
Whatever the issue, I wanted to try to recover as much as I could from the drive and that meant plugging into my PC.
Some initial searching led me to believe that the Xbox HD is locked so it can’t be read in a PC. I saw some YouTube videos showing users booting their original Xbox with the HD attached to unlock it, then moving the IDE cable over to their PC. I had the newer Xbox though… and it uses a SATA connection. Other videos showed people using a specific cable from the older Xbox HD caddy to connect their Xbox Slim SATA HD to the Xbox transfer cable. It quickly seemed like too much hassle and cost! So I thought I’d try a different approach B)
I had a cheap 2.5″ SATA HD Case sitting around, like the one pictured on the right, that I’d bought from eBay a while ago.
Opening the case reveals the end with the SATA connections.
I connected this directly to the Xbox SATA drive and plugged it into my laptop. The drive was recognised by the computer but wasn’t mountable as it uses a custom filesystem.
To recover the contents of the drive check out this post: Recovering Data from an Xbox Hard Drive
The repair didn’t take very long and was quicker than dismantling the adaptor! To fix the unit you’ll need the following:
Some comments from my last post suggest it’s possible to use a higher rated capacitor but I’m not sure what the benefit the higher rating brings. Please leave a comment if you can enlighten me 🙂
I used the braid to soak up the solder around the legs of the broken capacitor. Eventually you’ll be able to pull the capacitor off the board. I needed to use the braid a little more to clean the holes to allow the new capacitor in. This was how it looked after removing the faulty part:
I then cleaned up the holes with some flux. I applied the liquid flux I have with a small paintbrush; this helped me make sure it only went on the surfaces I intended to solder.
Now you can push the new capacitor in making sure it is the correct way round. The negative side should line up with the white part of the printed guide on the PCB.
On the underside of the PCB let the solder flow into the gap between the leg and the edge of the hole. It should fill the gap and, when cool, provide a nice strong connection. If you push on the capacitor there should be no movement below.
If you are happy with your soldering you need to cut the excess wire from the legs.
Whilst replacing the capacitor I noticed that I needed to carry out another quick repair. Because I hadn’t de-soldered the connections to the pins of the plug the PCB had remained attached during the repair. The regular flexing of these as I replaced the capacitor had caused one of the joints to split away from the metal strip. I managed to fix this joint by applying heat from the soldering iron to it and letting the solder join back onto the metal.
Pushing the PCB back into the case was quite tough as there is very little room to manoeuvre with the network port and reset button getting in the way! After some pushing and shoving I got it back together and plugged it into the power socket. No smoke or flames appeared which is always a good sign after doing work such as this. All three lights then came on and the unit started talking to my network as if nothing had ever gone wrong.
Hopefully, when I upgrade to the 500Mbps versions of these in the future, TP-Link may have a unit that will last a bit longer.
Disclaimer: If you do attempt to open up and fix your Powerline adaptor I cannot be held responsible for any damage that may occur to you, the adaptor or your surroundings. Be careful! Make sure you replace the parts with the same type and rating.
Comments from wordpress will appear here once I’ve copied them over 👍
Within the space of about 6 weeks the adaptors died in the same way, both suddenly not powering on, showing no signs of activity whatsoever. I bought some new devices and went to throw the TP-Link ones in the recycling but, unable to throw them away without knowing why they had stopped working, I decided to open them up and have a look inside.
They were tough to open but I eventually prised them apart where I noticed that both had the same fault! A single failed capacitor was the culprit. I know my way around a soldering iron and a circuit board but only usually enough to re-flow existing solder around a problem joint or broken track but after seeing that the replacement capacitors I needed only cost 99p from eBay I decided to attempt the repair.
Undo the only screw keeping the thing together. It’s located underneath the information label on the plug side. This will void your warranty if you still have one. Only continue if you are out of warranty or happy never to be able to return it.
Once you have removed the screw you can pop off the white plastic cover from the black base. I used a small screwdriver to prise it apart. It was quite tight and I did break one of the plastic clips so be careful 🙁
At this point of the dismantling I wasn’t sure how to go ahead. Something was holding the PCB down to the base of the casing, preventing it from lifting up, but still allowing for some movement. I managed to get the PCB out by pushing the network port side in and pushing up. It took a lot of force and I broke the plastic rod of the push button reset switch but I eventually got it open. This reveals what was stopping the PCB from coming out.
As you can see from the picture the Live and Neutral pins of the plug attach to two metal strips that are soldered to the top edge of the PCB (Yellow). I really should have de-soldered these before attempting to remove the PCB. If you don’t want to risk damaging the network port or reset switch then de-solder these strips before proceeding. You can also see my damaged reset button. I wasn’t too bothered about the damaged reset button as it still worked. I’d need to stick a pen in there in future to reach the switch. One benefit of this is that nobody will accidentally be able to press the switch and reset my network.
You should now have a fully opened adaptor with the PCB accessible on both sides.
I’ll cover the actual repair in my next post and show you just how simple it was.
Disclaimer: If you do attempt to open up and fix your Powerline adaptor I cannot be held responsible for any damage that may occur to you, the adaptor or your surroundings. Be careful! Make sure you replace the parts with the same type and rating.
Comments from wordpress will appear here once I’ve copied them over 👍
I’ve decided to reduce the amount of exposure I have to Google. I simply don’t feel comfortable with how much of my life is tracked and analysed.
I already run a few Firefox addons that should cut back the online breadcrumb trail but it’s still not enough. I found some Firefox addons (NoScript, RequestPolicy) were excellent at what they do but broke many websites due to the site’s reliance on third-party scripts. It’s not the addon’s fault but I don’t have the time to tailor different profiles on a per website basis. I now use Ghostery, Adblock and TACO to reduce the ability for ad networks to track me. But it still wasn’t enough. I’ve used Google forever for my searches and since Gmail launched for email. I championed them when they were the lightweight alternative to Yahoo! and Ask Jeeves. Given their size and reach I don’t want to feed the machine any further unless I explicitly choose to.
I’d recently read about a new search engine called DuckDuckGo, and a link on the site highlighted the issue of search engine filtering and bubbling. Although aware of it, I’d not really considered how it affected me and was blissfully unaware in the self-delusional world of “I’ll be alright. It doesn’t affect me”. But I’d seen the results of bubbling first-hand when I got different search results depending on the browser or computer I was using. This is why I am choosing to use someone else for my search needs.
I’ve now been using DuckDuckGo for about a month and have been very impressed. It took a little getting used to, but I now much prefer its results over those of Google. I’ve had to drop back to Google a couple of times, but that is more down to me still thinking the “Google” way when expecting certain things in the results.
The biggest advantage is that the search engine is all about YOU: it is highly configurable and has plenty of tricks up its sleeve to help prevent your data leaking when carrying out a search. Have a look at their Privacy Policy. It makes a refreshing read. If that doesn’t convince you, have a look at the goodies available to help empower your searches.
My next step is to change from Google Adwords… Wish me luck!
It’s very good at doing what you’d expect: handling all of the day-to-day formats you tend to come across (such as Zip and RAR) as well as plenty of less well-known or older formats (lha yay!). The interface is clean and the OS integration isn’t very intrusive (and you can remove it anyway if you’re not a fan of items cluttering your context menu).
I’ve always been impressed how I can extract files that don’t even look like archives and use 7zip as an extra security tool. I’ve successfully avoided trojans by extracting an executable file to find the real setup file inside. The wrapper executable was just a delivery vehicle for something malicious.
I’ve also loved the way you can treat an ISO image as an archive and open it up to get at specific files. I store a lot of ISO images on my fileserver so now get the best of both worlds: the original ISO images for faithful reproduction along with the ability to access files almost as easily as in a standard folder.
Finally, the most recent thing I discovered and the reason for this post of praise is the ability to open up and access files in a raw hard disk drive image!
I’d made a backup image of a failing 160GB HD using ddrescue and saved it to my server. I then needed to get access to some files on the image and mounted it using the loop device on Linux. The image has multiple partitions so I found the partition I needed, calculated the offset and mounted it. I then got sidetracked and didn’t get round to getting the files and shutdown the PC.
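The offset sum is simple: the partition’s start sector (from running `fdisk -l` against the image) multiplied by the sector size. The numbers below are made up for illustration:

```shell
# Illustrative values — read the real ones from `fdisk -l disk.img`.
SECTOR_SIZE=512
START_SECTOR=206848                      # start sector of the wanted partition
OFFSET=$((SECTOR_SIZE * START_SECTOR))
echo "$OFFSET"                           # prints 105906176

# Then mount that partition read-only via the loop device (as root):
# mount -o loop,ro,offset=$OFFSET disk.img /mnt/recovered
```

Mounting read-only is a sensible default when the image came from a damaged drive, as it avoids the filesystem driver trying to repair anything.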
The next morning, as I raced to get out of the door on time for a change, I remembered I needed some files and logged onto my Windows computer (which was already on). The backup folder was already open and the disk image showed the ImgBurn logo, as the .img extension is associated with that program.
ImgBurn couldn’t open the file, but seeing the 7zip entries in my context menu led me to try the ‘Open archive’ option.
Amazingly, it could see all the partitions of the raw image dump. It took a minute as it read the partition info, but I could even double-click on the NTFS partitions to see the individual files and folders contained within.
Brilliant!