Auto-blocking attacking IP addresses

This is to share my PowerShell code online. I originally asked this question on the MSDN forum (or TechNet?) here:

In short, the script tries to find attacking IP addresses and then add them to a Windows Firewall block rule. My assumptions:

1, You are running Windows Server 2008 facing the Internet.

2, You have some ports open for services, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see my code only deals with these two, since those are the ports I opened. You can add further port numbers if you like, but the way to process them might differ from these two.

3, I strongly suggest you use a STRONG password and follow all security best practices. This ps1 code is NOT for adding security to your server; it reduces the nuisance from brute-force attacks and makes the sysadmin's life easier: your FTP log won't hold megabytes of nonsense, and your Windows Security log won't roll over so fast that it can only tell you what happened last month.

4, You are comfortable setting up Windows Firewall rules. In my code the rule is named “MY BLACKLIST”; you need to set up a similar one and set it to BLOCK everything.
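If you haven't created the rule yet, something like the following one-off line (run once from an elevated prompt) sets it up. The seed address 255.255.255.254 is just a harmless placeholder of my choosing, so the block rule doesn't start out matching every remote IP; the script appends real attackers to it later:

```
netsh advfirewall firewall add rule name="MY BLACKLIST" dir=in action=block remoteip=255.255.255.254
```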

5, My rule is dangerous because it carries the risk of blocking myself out as well. I do have a backup plan, the DELL DRAC5, so that if that happens I can still remote-console to my server and reset the firewall.
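If that ever happens, one line from the remote console resets the rule's IP list back to a harmless placeholder (the address here is just an example):

```
netsh advfirewall firewall set rule name="MY BLACKLIST" new remoteip=255.255.255.254
```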

6, The code is by no means perfect: the coding style, the use of PowerShell, and the hard-coded parts can all be improved. It's just good enough for me already, and it has been running on my server for more than 7 MONTHS.

7, The current code still has a problem I haven't solved yet; more on this point after the code. :)


 #Dong Xie, March 2012
 #my simple code to monitor attacks and deal with them
 #Windows Server 2008 Logon Types:
 #8:  NetworkCleartext, i.e. FTP
 #10: RemoteInteractive, i.e. RDP
 $tick = 0;
 "Start to run at: " + (Get-Date);
 $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
 $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

 while ($True) {
   $blacklist = @();
   "Running... (tick:" + $tick + ")"; $tick += 1;

   #Port 3389
   $a = @();
   netstat -no | Select-String ":3389" | ForEach-Object {
     $m = $regex1.Match($_);
     $ip = $m.Groups[1].Value;
     if ($m.Success -and $ip -ne "") { $a += $ip; }
   }

   if ($a.Count -gt 0) {
     $ips = Get-EventLog Security -Newest 1000 |
       Where-Object { $_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10" } |
       ForEach-Object { $regex2.Match($_.Message).Groups[1].Value } |
       Sort-Object | Tee-Object -Variable list | Get-Unique;

     foreach ($ip in $a) {
       if (($ips -contains $ip) -and -not ($blacklist -contains $ip)) {
         $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).Count;
         "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
         if ($attack_count -ge 20) { $blacklist += $ip; }
       }
     }
   }

   #Port 21 (FTP)
   $now = (Get-Date).AddMinutes(-5); #check only the last 5 mins.
   #Get-EventLog has built-in switches for EventID, Message, Time, etc., but using any of them is VERY slow.
   $count = (Get-EventLog Security -Newest 1000 |
     Where-Object { $_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and
       $_.TimeGenerated.CompareTo($now) -gt 0 } | Measure-Object).Count;

   if ($count -gt 50) { #threshold
     $ips = @();
     $ips1 = dir "C:\inetpub\logs\LogFiles\FTPSVC2" | Sort-Object -Property LastWriteTime -Descending |
       select -First 1 | gc | select -Last 200 |
       where { $_ -match "An\+error\+occured\+during\+the\+authentication\+process\." } |
       Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" |
       select -ExpandProperty Matches | select -ExpandProperty Value |
       Group-Object | where { $_.Count -ge 10 } | select -ExpandProperty Name;
     $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending |
       select -First 1 | gc | select -Last 200 |
       where { $_ -match "An\+error\+occured\+during\+the\+authentication\+process\." } |
       Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" |
       select -ExpandProperty Matches | select -ExpandProperty Value |
       Group-Object | where { $_.Count -ge 10 } | select -ExpandProperty Name;
     $ips += $ips1; $ips += $ips2;
     $ips = $ips | where { $_ -ne "" } | Sort-Object | Get-Unique;
     foreach ($ip in $ips) {
       if (-not ($blacklist -contains $ip)) {
         "Found attacking IP on FTP: " + $ip;
         $blacklist += $ip;
       }
     }
   }

   #Firewall change
   <# First solution using netsh, kept for reference; it fails once the rule's IP list passes ~8KB:
   $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" |
     where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/","");
   #inside $current there is no \r or \n to remove.
   foreach ($ip in $blacklist) {
     if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
       "Adding this IP into firewall blocklist: " + $ip;
       $c = 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
       Invoke-Expression $c;
     }
   }
   #>
   foreach ($ip in $blacklist) {
     $fw = New-Object -ComObject HNetCfg.FwPolicy2;
     $myrule = $fw.Rules | where { $_.Name -eq "MY BLACKLIST" } | select -First 1; #potential bug here?
     if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
       "Adding this IP into firewall blocklist: " + $ip;
       $myrule.RemoteAddresses += "," + $ip;
     }
   }

   Wait-Event -Timeout 30 #pause 30 secs
 } #end of top while loop


Further points:

1, I assume the server is listening on port 3389 on the server IPs hard-coded in $regex1; you need to replace those with your real IPs.

2, I assume you Remote Desktop to this server from a workstation whose IP or subnet is excluded in the code (the -ne and 10.0.0.* checks); please replace those as well.

3, The threshold for a 3389 attack is 20; you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold by your own reasoning.

4, The FTP check only looks at events from the last 5 mins; you can change that as well.

5, I assume the server is serving FTP on both IP addresses, and that the log paths are C:\inetpub\logs\LogFiles\FTPSVC2 and C:\inetpub\logs\LogFiles\FTPSVC3. Change accordingly.

6, The FTP checking code only reads the last 200 lines of the log, and the threshold is 10; change as you wish.

7, The code runs in a loop; you can set the loop interval at the last line.

To run this code: copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running; you can Ctrl-C to break it.

This is what you see when it’s running:


This is when it detected attack and adding the firewall rule:


Regarding the design of the code:

1, There are many ways to detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. You don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.

2, Thus for each service/port, I double-check. For 3389, the IP first needs to show up in netstat.exe, then in the Event log; for FTP, I first check the Event log, then the FTP log files.

3, In three places I need to make sure I'm not adding myself to the block rule: -ne for a single IP, -like for a subnet.
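For example (the addresses here are illustrative; substitute your own workstation IP and subnet):

```powershell
$ip -ne "203.0.113.7"    #exact match against a single known-good IP
$ip -like "10.0.0.*"     #wildcard match against a whole subnet
```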


Now the final bit:

1, The code will stop working after a while (depending on how heavily you are attacked: weeks, months, or days?!). It will throw a red error message in CMD. Don't panic: it does no harm, but it also no longer blocks new attacks. The reason (not confirmed with MS people): the COM object that manages the firewall seems to accept an IP address list only up to around 32KB in length; once it reaches that limit, you get the error message.

2, This is in fact my second solution, using the COM object; the first solution is still in the comment block for your reference. That one used netsh, and it fails because, being run from CMD, you can only pass it an IP list of up to about 8KB.

3, I haven’t worked the workaround yet, some ideas include: wrap that RemoteAddresses setting line with error checking and once it reaches the limit, use the newly detected IP to be the list, not appending to it. This basically reset your block rule to ground zero and lose the previous bad IPs. This does no harm as it sounds, because given a certain period has passed, any these bad IPs still not repent and continue the attack to you, it only got 30 seconds or 20 guesses of your password before you block it again. And there is the benefit that the bad IP may turn back to the good hands again, and you are not blocking a potential customer or your CEO’s home pc because once upon a time, it’s a zombie. Thus the ZEN of blocking: never block any IP for too long.

4, But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary-length list of IP addresses in a rule; at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full; then you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

Enjoy the code!

p.s. (PowerShell is REALLY REALLY GREAT!)

Posted by dong | 11 comment(s)

EXAM VOUCHER, one left.

I still have one free exam voucher left from the Launch Tour 2005. You can use it to take one of the following exams:

70-431, 70-536, 70-551, 70-552, 70-553, 70-235

The voucher expires on 31-12-2006, which means you must take the exam before the new year. I'm sure the Prometric centre at Ealing, London will be open to book; most other testing centres will be either fully booked or closed.

Email me or leave comments, then I'll send you the code.

Merry Christmas and Happy New Year!

Posted by dong | with no comments

2.3 GByte/s, can your SAN do that?

Recently I built a new server to run our beloved SQL Server 2005. The design idea came from Dr. Jim Gray's server brick and the SQL Server Magazine March 2006 article "Pare Down and Power Up". The major differences: I chose a SAS JBOD, a SAS RAID controller, and SAS drives, so that the RAID-1, RAID-0, and RAID-10 work is offloaded from the server CPU.

Here comes the proof screenshot during my configuration testing with IO Meter:


Figure 1, IO Meter screenshot, 2.37GB/s Reading speed, at a CPU load 4.75%

This is by far the maximum sustainable number I can achieve, although I believe the system could do better, at least in theory. You may argue that I won't get this speed in production, and yes, I agree: only half the spindles will go to a Very Large Table filegroup, the others go to LOG, TempDB, etc. But still, a table scan at 1GB/s over a 1TB table only needs around 17 minutes.

Here is the challenge for the lucky SAN owners: please use the comment area to show people that your expensive SAN can do better than this. Information needed:

  • Your SAN vendor,
  • Model,
  • Peak speed,
  • Price tag

If anyone can kindly offer some prize, then we can run this as a competition! (Note: I hope Santa is browsing our Blog.)

Posted by dong | 3 comment(s)

How To: BULK INSERT your User Defined Types (UDT)?

The task is to bulk load data from flat files into tables with UDT column. This blog post will show you the problem and the solution.

We will use the Point type as an example, whose source code you can find in the Books Online (BOL), on the MSDN web, or in my blog file folder.

Suppose you've either compiled the source code into a DLL using Visual Studio or downloaded it; the next step is to open SSMS, open a query window, use TempDB or create a TEST database, then run these SQL statements:


CREATE ASSEMBLY Point
FROM 'D:\DongDoc\SQL Server Management Studio\Projects\UDT_Point\UDT_Point.dll'
WITH PERMISSION_SET = SAFE;
GO

CREATE TYPE Point
EXTERNAL NAME Point.[Point];
GO

CREATE TABLE dbo.T1 (
Col1 varchar(10),
Col2 Point NULL
);
GO

Now we stuff our table T1 with some values:


INSERT INTO dbo.T1 (Col1, Col2) VALUES ('TEST', CONVERT(Point, '3,4'));

INSERT INTO dbo.T1 (Col1, Col2) VALUES ('TEST', CONVERT(Point, '1,5'));

INSERT INTO dbo.T1 (Col1, Col2) VALUES ('TEST', CAST ('1,99' AS Point));

By doing CAST/CONVERT from string to the UDT type, the CLR engine will call the Point class's Parse(SqlString s) method. If we now do a SELECT *, the result looks like:

Col1 Col2
---------- --------------------
TEST 0x008000000380000004
TEST 0x008000000180000005
TEST 0x008000000180000063

At this point, if you are a programmer, you should realize that a UDT in the database persists object instances in binary/byte format. So these inserts will succeed as well:

--INSERT the binary representation of the UDT, with a type cast
INSERT INTO dbo.T1 (Col1, Col2) VALUES ('TEST',
CAST(0x008000000380000004 AS Point));

--INSERT the binary representation of the UDT, without a cast
INSERT INTO dbo.T1 (Col1, Col2) VALUES ('TEST', 0x008000000380000004);

 Now we prepare a text file to try BULK INSERT:

Data1.txt (save as Tab Delimited)



TEST 0x008000000380000004
TEST 0x008000000180000005
TEST 0x008000000180000063

The BULK INSERT statement:

BULK INSERT dbo.T1
FROM 'D:\DongDoc\SQL Server Management Studio\Projects\UDT_Point\Data1.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n'); --tab-delimited, per Data1.txt above

 This is the error message you get:

Msg 4863, Level 16, State 1, Line 3
Bulk load data conversion error (truncation) for row 1, column 2 (Col2).

Believe me, this is not a format-file issue; you'll get similar errors if you provide a format file. But this will work:

Data2.txt (save as Tab Delimited)


TEST 008000000380000004
TEST 008000000180000005
TEST 008000000180000063

This exactly proves how BULK INSERT works inside SQL Server: BULK INSERT sends the parsed rowset to the Query Engine directly and won't be bothered to do anything else. (For a good explanation of BULK INSERT/BCP, please go to, look for the April 2004 issue, topic "Bulk Loading Data into SQL Server 2000".)

But our task, if you still remember, is to load a file like this:

Data3.txt (save as Tab Delimited, can't be a csv, right?)


TEST 3,4
TEST 1,5
TEST 1,99

 Please spend some time playing with this, and see if you can get it BULK INSERTed, with or without a format file. (In case you really get through, please let me know immediately, even if it's 4am.)


The reason, as I see it: all bulk operations are designed for a single purpose, i.e. the BULK operation. So BULK INSERT won't invoke Point.Parse(SqlString s) at all; it just waits for you to give it some binary to pass to the Engine, which writes it to the data file without any further change. This is actually provable with debugging techniques, e.g. set a breakpoint inside the Parse(...) method, then fire the BULK statement from SSMS.







Here is the statement that does work:

INSERT INTO dbo.T1 (Col1, Col2)
SELECT a.Col1, CONVERT(Point, a.Col2)
FROM OPENROWSET(
   BULK 'D:\DongDoc\SQL Server Management Studio\Projects\UDT_Point\Data3.txt'
  ,FORMATFILE = 'D:\DongDoc\SQL Server Management Studio\Projects\UDT_Point\format.fmt') AS a;

 format.fmt (save as Tab Delimited)

9.0
2
1 SQLCHAR 0 10 "\t" 1 Col1 SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 20 "\r\n" 2 Col2 SQL_Latin1_General_CP1_CI_AS

 (note: the length of 20 is somewhat arbitrary, and the 9.0 version header assumes SQL Server 2005).

 According to Sunil Agarwal of the SQL Server Storage Engine group at Microsoft, who is responsible for concurrency, indexes, tempdb, LOBs, supportability, and bulk import/export, this could be the only solution for our task. The original discussion with him can be found here. Also according to Sunil, this solution is still a streamed process, not a staged one. Yet beware the performance impact: for a million Points in the flat file, that's a million calls to Point.Parse(), so write your code well.

 So the hero who comes to the rescue is the OPENROWSET(BULK) statement. Enjoy!

Posted by dong | with no comments

SQL IO Simulator

This is the new tool to replace SQLIOStress. 

Link to SQL IO Simulator

Posted by dong | with no comments

Confusing bug: ASP.NET custom control refuses to work

I was converting a 2003 web application into a VS 2005 project. Hundreds of bugs turned up. I know some of them are not real; they are only there because of other, real bugs, but the compiler will complain about them anyway. Here comes the wisdom of choosing which file to fix first; sadly, I haven't got much of it.

Noticed an error message complaining about this line in a .ascx file:
<MyTag:MyControl1 id="control1" runat="server" />

I found the Directive line at the top:
<%@ Register TagPrefix="MyTag" Namespace="MyNameSpace" Assembly="MyAssembly" %>

Since my control code is inside the \App_Code folder now (that's new in VS 05), there won't be a DLL file to refer to, so I removed the Assembly="MyAssembly" pair. Still, it didn't work. I stared at the code for a long time and couldn't find any error there. So I created a new web project, added the control code into \App_Code, declared my tag, and it WORKED.

Only at this point did my rational thinking come back from its beach holiday (I guess). Since \App_Code holds lots of code files, VS 05 must compile the whole folder into a single DLL in a temp place, to be referenced by webforms. If a single bug in any file prevents that DLL from being compiled, you just don't have the DLL; and then you have nothing at all.

So my custom control code is bug free, and the .ascx file too. It's some other file I should look at, really!
Posted by dong | with no comments

Montecito released! Finally!

Intel announced the release of the first dual-core Itanium 2 chip, code-named Montecito, yesterday. Although it doesn't quite match the specification disclosed two months ago (the MHz, FSB, etc.), I think it will do well compared with the old single-core Madison.

Bull has the new model with details on their website. Unisys also has news about it. But nothing is mentioned on HP's web, even though ~80% of Itanium boxes sold are HP branded. Oh, I did find one thing there under the Offer section: it seems HP is trying to clear stock with a buy one (CPU) get one free offer.

That all said, I personally love the new Bull NovaScale 3045, a 4U, 4-processor system. The reason is its new chipset. The Montecito chip is dual-core and HT enabled, that's 4 threads in total, a data-crunching monster really. How to feed it enough food is a big issue, if you are aware that Intel hasn't been able to embed the memory controller into the CPU; thus a Northbridge chip is there, along with the front-side bus (FSB), the limiting bottleneck.

Figure 1. Intel's 8870 chipset diagram

Figure 2. Intel's 8870sp chipset.

If you look carefully, you'll notice that both chipsets provide a single FSB to FOUR Montecitos, that's 16 threads in total. You'll understand what I'm saying here if you have ever driven on the M25 at 4pm.

Bull's 3045 uses a new custom-built chipset with TWO FSBs, so each FSB only deals with 2 processors. (Other series from Bull haven't got this far.) Shipping time: August.

This has really been a long wait. I have been trying to get a Montecito box since March 2005, when I was at the 64-bit training course at Microsoft Chertsey... ...
Posted by dong | with no comments

SSIS Lookup Transformation Dynamic SQL workaround

In SSIS, the Lookup Transformation is a really important tool. But its design prevents users from setting the SQL statement dynamically.

In my work, I need to prepare a text file for BULK loading into a table. The original csv file could be 10 million ~ 1 billion rows. Along the way I need to look up several IDs by name from different tables. These reference tables all have a Project_ID column, which sub-groups the data for a specific project. Normally the original csv file only holds data for a single project.

Of course I can just do:

SELECT xx_ID, xx_Name FROM dbo.TABLE1

to fill the Lookup Transformation with a full cache. But doing this will use lots of memory, waste resources, and perform badly.

I would rather want to do:

SELECT xx_ID, xx_Name FROM dbo.TABLE1 WHERE Project_ID = ?

The Project_ID value is only known at runtime from a user input parameter. Unfortunately the Lookup Transformation does not support that (yet; maybe a future version will).

So I came up with this idea: BCP dump + OPENROWSET (BULK).

First we use an Execute Process Task, to call Bcp.exe, specify the Arguments using Expressions, the Project_ID will be a SSIS variable. This is like you run this line in CMD:

c:\Bcp.exe "SELECT xx_ID, xx_Name FROM dbo.TABLE1 WHERE Project_ID = 101" queryout tempref1.txt -N -T
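In the Expressions page of the Execute Process Task, the Arguments property can be built dynamically along these lines (the variable name @[User::ProjectID] and the output path are my assumptions; adjust to your package):

```
"\"SELECT xx_ID, xx_Name FROM dbo.TABLE1 WHERE Project_ID = "
  + (DT_WSTR, 12) @[User::ProjectID]
  + "\" queryout c:\\tempref1.txt -N -T"
```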

Then you get a text file with the columns you want, for the project relevant to this run. The reason to specify -N is to save character data as Unicode and everything else in SQL Server native format; this saves a lot on both file size and I/O speed for the later reading process.

The second step is to configure our Lookup Transformation by setting it to Use results of an SQL query; the query should be like this:

SELECT a.* FROM OPENROWSET( BULK 'c:\tempref1.txt', FORMATFILE = 'c:\tempref1.fmt') AS a

I didn't mention the step to create the format file.
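For completeness, a format file matching the -N dump can be generated from the table itself, with something like this (table and file names follow the examples above; substitute your own):

```
c:\>Bcp.exe dbo.TABLE1 format nul -N -T -f c:\tempref1.fmt
```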

If the SQL statement is correct and both the txt and fmt files are ok, when you click the Preview button, you'll see a data grid. Then just continue your normal configuration of Lookup Trans.

So far this works for my situation. I can do multiple lookups by Bcp-ing to different file names. Obviously I can't fire two instances of the package at the same time, and I can't set the path and file name at runtime either.

Any comments are welcome! (Maybe I can do a sample package if anyone is interested, in a Northwind or AdventureWorks context?)
Posted by dong | 2 comment(s)