This evening 雕梁 said he wanted a tool to investigate what happens on a Unix domain socket, e.g. whether program A actually sent the data and whether program B received it. He tried tcpdump, wireshark and the like, but none of them seem to support Unix domain sockets.
Once again the mighty systemtap comes to the rescue. All socket communication goes through the common socket layer, whatever the address family, Unix domain sockets included, so intercepting the few syscalls that read from and write to sockets is enough to get the job done.
The systemtap distribution ships a tool for exactly this, socktop, located at /usr/share/doc/systemtap/examples/network/socktop. It is very handy and is just the right tool for the job.
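For reference, the core idea fits in a few lines of systemtap. The sketch below is only an illustration, assuming the socket.send / socket.receive tapset probes and their family / size variables are available in your systemtap version (family 1 is AF_UNIX); socktop itself does a more thorough job:

# unix_socktop.stp -- per-process byte counts over AF_UNIX sockets (sketch)
global tx, rx
probe socket.send    { if (family == 1) tx[execname()] <<< size }
probe socket.receive { if (family == 1) rx[execname()] <<< size }
probe timer.s(5) {
    printf("%-16s %12s %12s\n", "COMM", "SENT(B)", "RECV(B)")
    foreach (name in tx-) {
        r = 0
        if (name in rx)
            r = @sum(rx[name])
        printf("%-16s %12d %12d\n", name, @sum(tx[name]), r)
    }
    delete tx
    delete rx
}

Run it with something like $ sudo stap unix_socktop.stp and you can see at a glance whether A is sending and B is receiving.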
Read more…
When tuning a system or chasing a problem, we often find that a multi-threaded program runs inefficiently but have no idea where the problem lies. All we know is that there are a lot of context switches; why they happen and who causes them, we cannot tell. The number of context switches can be observed with a tool such as dstat, for example:
$dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
9 2 87 2 0 1|7398k 31M| 0 0 | 9.8k 11k| 16k 64k
20 4 69 3 0 4| 26M 56M| 34M 172M| 0 0 | 61k 200k
21 5 64 6 0 3| 26M 225M| 35M 175M| 0 0 | 75k 216k
21 5 66 4 0 4| 25M 119M| 34M 173M| 0 0 | 66k 207k
19 4 68 5 0 3| 23M 56M| 33M 166M| 0 0 | 60k 197k
# or watch it with a systemtap script
$sudo stap -e 'global cnt; probe scheduler.cpu_on {cnt<<<1;} probe timer.s(1){printf("%d\n", @count(cnt)); delete cnt;}'
217779
234141
234759
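The one-liner above only counts the total. A small extension of the same idea at least shows which processes the switches go to; this is just a sketch, and it assumes that execname() inside the scheduler.cpu_on probe names the task that has just been given the CPU:

global cnt
probe scheduler.cpu_on { cnt[execname()] <<< 1 }
probe timer.s(1) {
    foreach (name in cnt- limit 10)
        printf("%-20s %d\n", name, @count(cnt[name]))
    printf("\n")
    delete cnt
}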
That is roughly 200k context switches per second. Who can tell me what is actually going on behind them? Well, latencytop to the rescue!
Its official site: http://www.latencytop.org/
Skipping audio, slower servers, everyone knows the symptoms of latency. But to know what’s going on in the system, what’s causing the latency, how to fix it… that’s a hard question without good answers right now.
LatencyTOP is a Linux* tool for software developers (both kernel and userspace), aimed at identifying where in the system latency is happening, and what kind of operation/action is causing the latency to happen so that the code can be changed to avoid the worst latency hiccups.
It is another performance viewer contributed by Intel, alongside powertop; both are excellent tools.
Read more…
Sometimes, while reading system code, it is hard to see from the source how a function we are interested in actually gets called, because there may be too many possible call paths. For a user-space program, setting a breakpoint in gdb works well, but for the kernel things get awkward. This is where systemtap can help, for example:
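A minimal sketch of the idea; kmem_cache_alloc here is just a stand-in for whatever kernel function you care about:

probe kernel.function("kmem_cache_alloc") {
    printf("%s(%d) -> %s\n", execname(), pid(), probefunc())
    print_backtrace()
    exit()   # one hit is enough to see the call path
}

The first time the probe fires you get the kernel call chain that led to the function, which is usually much quicker than chasing callers through the source.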
Read more…
When running stap, you will often find that a pile of unexpected output is printed when the script finishes. A closer look shows it is a dump of the global variables, which is rather annoying.
Let me demonstrate:
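A minimal reproduction of the behaviour; the rule of thumb is that any global that gets written but never read is dumped by stap when the session ends:

global hits
probe syscall.read { hits[execname()] <<< 1 }
probe timer.s(3)   { exit() }

Because hits is never read anywhere, stap helpfully prints its whole contents on exit. Consuming the variable yourself, for example printing it in a probe end block and then deleting it, is enough to keep that dump from appearing.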
Read more…
When we use a script to collect system information, we often store the results in a map-like data structure, but the stap script frequently complains with something like "ERROR: Array overflow, check MAXMAPENTRIES near identifier 'a' at test.stp:6:5" and then exits by itself.
This is a safeguard in the stap runtime against users abusing system resources, a bit of convenience traded for safety, but it is a real nuisance for scripts that need to run for a long time, so let's demonstrate how to get around it:
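A sketch of both the problem and the usual workarounds (syscall.open is just an example event; by default each map holds at most 2048 entries):

# test.stp -- the map "a" overflows once it collects too many distinct keys
global a
probe syscall.open { a[execname(), pid()] <<< 1 }
probe timer.s(60)  { delete a }   # workaround 1: prune the map periodically

Workaround 2 is to raise the limit itself, either by declaring the array with an explicit size (global a[102400]) or by rebuilding with something like $ sudo stap -DMAXMAPENTRIES=102400 test.stp.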
Read more…
雕梁 just told me that Google has open-sourced its in-house fast compression algorithm, AKA Zippy. I had a look and it seems pretty good.
Project home page: http://code.google.com/p/snappy/
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.
Snappy is widely used inside Google, in everything from BigTable and MapReduce to our internal RPC systems. (Snappy has previously been referred to as “Zippy” in some presentations and the likes.)
Read more…