From: Ohmster on
Bit Twister <BitTwister(a)mouse-potato.com> wrote in
news:slrnhptulp.69a.BitTwister(a)cooker.home.test:

>> This is what is really, really wrong with it and makes it not
>> usable: 1.) URLs need to be one per line, list fashion, rather than
>> one huge block of unbroken text. A carriage return needs to be
>> inserted after each URL to drop down to the next line.
>
> Then you change html controls to use list instead of what I picked.

Oh Bit, please do not get mad at me, this is not criticism or
non-appreciation. I do appreciate the script and the education very,
very much. :)


>> 2.) It is not necessary or desirable to include the pwd in the html
>> file.
>
> Hey, it is your code on your system, you take it out. :-)

Of course.

>> 3.) This really makes no sense, I see the word "Raton" as in Boca
>> Raton repeated over and over, each one is a url to
>> file:///Z:/bench/Raton
>
> Unless I can see the file, I can not see the fix.

I will supply all the files you need on my Comcast personal web server
space if that will help. I will give you the shortcuts, plus the script
and output files, for you to examine. Thanks!


>> 4.) There are many, many baseurl links that are useless, for example:
>> design.url:BASEURL
>
> Need to see the shortcut file.

They are coming!

>> 5.) This is a *way* cool idea but I think the problem is that you
>> have no desktop shortcuts to practice your script on, and I am quite
>> sure that your code is based on sound principles. To make this easier,
>> if you would like to try out your own script, I have provided a zip
>> file of all of the desktop shortcuts for you to experiment with or at
>> least try the script with. Here you go:
>> http://www.ohmster.com/~ohmster/temp/desktopshortcuts.zip
>
> Ok, try this one

Sending what you need so you can see what needs to happen.

http://home.comcast.net/~theohmster/files/desktopshortcuts.zip
http://home.comcast.net/~theohmster/files/BT_Files.zip

Okay, there are the desktop shortcuts, and there are the three Bit
Twister files: the script, the text output, and the html output. Help
me please, Bit Twister, you are so close, man!

Thank you!


--
~Ohmster | ohmster59 /a/t/ gmail dot com
Put "messageforohmster" in message body
(That is Message Body, not Subject!)
to pass my spam filter.
From: John Hasler on
This should work:

grep '^URL=http://' *.url | cut -f2- -d= > urls

The change is the addition of a dash after "-f2". It tells cut to print
the second field and all subsequent fields rather than only the second
field.
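A quick way to see the difference is a URL that itself contains an "=",
which is common in query strings (the URL below is just a made-up
example):

```shell
line='URL=http://example.com/watch?v=abc123'

# -f2 alone stops at the next '=', truncating such URLs:
echo "$line" | cut -f2 -d=
# prints: http://example.com/watch?v

# -f2- keeps field 2 and every field after it, rejoined with '=':
echo "$line" | cut -f2- -d=
# prints: http://example.com/watch?v=abc123
```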
--
John Hasler
jhasler(a)newsguy.com
Dancing Horse Hill
Elmwood, WI USA
From: Bit Twister on
On Mon, 15 Mar 2010 22:39:06 -0500, Ohmster wrote:
> Bit Twister <BitTwister(a)mouse-potato.com> wrote in
> news:slrnhptulp.69a.BitTwister(a)cooker.home.test:
>
>>> This is what is really, really wrong with it and makes it not
>>> usable: 1.) URLs need to be one per line, list fashion, rather than
>>> one huge block of unbroken text. A carriage return needs to be
>>> inserted after each URL to drop down to the next line.
>>
>> Then you change html controls to use list instead of what I picked.
>
> Oh Bit, please do not get mad at me, this is not criticism or
> non-appreciation. I do appreciate the script and the education very,
> very much. :)


Hehehe, not getting mad. It is up to you whether to make the cosmetic changes.

>>
>> Need to see the shortcut file.
>
> They are coming!
> http://home.comcast.net/~theohmster/files/desktopshortcuts.zip

Got them

> Sending what you need so you can see what needs to happen.


> http://home.comcast.net/~theohmster/files/BT_Files.zip

No idea why I cannot unzip that one.
[bittwister(a)cooker zip]$ unzip BT_Files.zip
Archive: BT_Files.zip
skipping: urls.html `PPMd' method not supported
skipping: urls.txt `PPMd' method not supported
skipping: geturls `PPMd' method not supported

Does not matter. I can recreate the urls.html with the last script I
sent, run against your zipped desktopshortcuts.zip.


Lots of them work. Some you may have to tweak in the code. You will
need to provide a broken url for me to look at. Here is the latest
copy of the code for us to work with.


#!/bin/bash
_out_fn=urls.html

echo "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">
<!-- save as index.html -->
<!-- Enter file:$_out_fn in browser Location: -->
<HTML>
<HEAD>
<TITLE>Local Home Page $_out_fn </TITLE>
</HEAD>
<!-- Background wheat, links blue (unvisited), red (visited), green (active) -->
<BODY
BGCOLOR=\"Wheat\"
LINK=\"Blue\"
VLINK=\"Red\"
ALINK=\"Green\"
TEXT=\"Black\"
>
<H4 ALIGN=\"CENTER\">
$PWD
</H4>
<ul>
" > $_out_fn

grep --no-filename BASEURL= *.url > urls.txt

while read -r _line ; do
    # Keep everything after the first '=' so URLs containing '=' survive
    _url=${_line#*=}
    echo "<li><A href=\"$_url\">$_url</A></li>" >> $_out_fn
done < urls.txt

echo "
</ul>
</BODY>
</HTML>
" >> $_out_fn

echo "Output in $_out_fn"
From: Ohmster on
John Hasler <jhasler(a)newsguy.com> wrote in
news:87aau9szrp.fsf(a)thumper.dhh.gt.org:

> This should work:
>
> grep '^URL=http://' *.url | cut -f2- -d= > urls
>
> The change is the addition of a dash after "-f2". It tells cut to print
> the second field and all subsequent fields rather than only the second
> field.

Wow, that really worked. I was afraid that I would have to use the first
output and redo it all over by hand because I don't know which urls are
good and which are not. But this second version really seemed to do the
trick, and on time too! I can now get this job done in a timely fashion,
thank you very much John.

I am so anxious to see what Bit Twister comes up with. He is working so
hard on an html version that would be really cool, but it has too many bugs
in it as of right now. I cannot say I blame him, he has nothing to work
with so maybe once he gets a chance to see the url files that I have to
work with, his script may be salvageable. I hope so, it sure does look
promising, plus he did a great job on commenting it too so that I can learn
from it. But I will hand it to you, John: you got the job done as a text
file, and I can work with that. Thanks and goodnight.

--
~Ohmster | ohmster59 /a/t/ gmail dot com
Put "messageforohmster" in message body
(That is Message Body, not Subject!)
to pass my spam filter.
From: Bit Twister on
On Mon, 15 Mar 2010 22:57:26 -0500, Ohmster wrote:
>
> I am so anxious to see what Bit Twister comes up with.

Here is the script using John's grep command.

#!/bin/bash
_out_fn=urls.html
_in_fn=urls.txt

echo "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">
<!-- save as index.html -->
<!-- Enter file:$_out_fn in browser Location: -->
<HTML>
<HEAD>
<TITLE>Local Home Page $_out_fn </TITLE>
</HEAD>
<!-- Background wheat, links blue (unvisited), red (visited), green (active) -->
<BODY
BGCOLOR=\"Wheat\"
LINK=\"Blue\"
VLINK=\"Red\"
ALINK=\"Green\"
TEXT=\"Black\"
>
<ul>
" > $_out_fn

grep --no-filename '^URL=http://' *.url | cut -f2- -d= > $_in_fn

while read -r _line ; do
    echo "<li><A href=\"$_line\">$_line</A></li>" >> $_out_fn
done < $_in_fn

echo "
</ul>
</BODY>
</HTML>
" >> $_out_fn

echo "Output in $_out_fn"
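For anyone following along without the shortcut zips, the extraction
step can be sanity-checked in an empty directory with a hand-made
shortcut file. Windows .url shortcuts are small INI-style files; the
example.com URL below is just a placeholder:

```shell
# Fake up a Windows shortcut file:
cat > sample.url <<'EOF'
[InternetShortcut]
URL=http://example.com/page?id=1&mode=x
EOF

# The same extraction the script performs: split on '=', keep
# everything from field 2 on, so '=' inside the URL survives.
grep --no-filename '^URL=http://' *.url | cut -f2- -d=
# prints: http://example.com/page?id=1&mode=x
```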